Monthly Archives: August 2012

Politics and Science — Ugh

Darn.  I’d really like to keep partisan politics out of science to the degree we can, but it looks like there’s going to be a flare-up.

This is not to say that politics and science shouldn’t mix.  I actually believe the opposite.  Science is a governmental, social, and international endeavor, and so politics is an essential part of the profession — the AAS has its headquarters in Washington for this reason.  Astronomers are encouraged to go to Washington to argue to Congress for funding for our field, and we have an obligation to fight for the political and social rights and freedoms of our fellow astronomers (as citizens and moral beings we also should fight for non-astronomers, but that doesn’t have anything to do with astronomy).  Some of our work, such as asteroid watches, solar physics, and atmospheric science, has direct implications for social policy, so we need to wade in there as well.  The AAS does these things well.
But partisan politics that does not touch on these elements should be left alone, because it would unnecessarily divide our community over non-astronomy topics and tarnish our reputations as objective seekers of truth.  The standards of truth in partisan politics are so appallingly low (what will the press print without qualification, what won’t get someone convicted of perjury or defamation) that scientists, with ostensibly high standards for truth and persuasion, cannot help but be sullied by the exercise.  Individual astronomers can, and in many cases should, dive into the fray as citizens, but the AAS and our other official bodies and organizations should not.
On the cover of the latest AAS Newsletter (AAS-members-only until November) our new president, David Helfand, argues that astronomers should fear the growth of entitlement spending and fight to check it for the good of the field and the good of the profession.
[Screenshot of Dr. Helfand’s column from the AAS Newsletter]

In the column, Dr. Helfand:
  • clearly identifies the growth of “transfer payments”, which he uses synonymously with “payments to individuals” and “entitlements”, as the primary obstacle to proper levels of spending on science and astronomy now and in the future.  
  • specifically points to the “ratio of investments [including science] to entitlements” which “has changed from 1.15 to 0.27 over the past 50 years.”  
  • includes a chart showing how investments have decreased and payments to individuals have increased as fractions of the total federal budget over the last 50 years (the data are sourced, but the source of the figure itself is unclear)
  • says that he brought up entitlement growth in “one office[] traditionally very supportive of the scientific enterprise” on Capitol Hill and was told that “even raising the issue would greatly surprise and disturb some of our most loyal allies.”
  • compares those who hold this attitude to climate deniers, analogizing denial of entitlement growth to denial of the growth in concentrations of atmospheric carbon and mean global temperature.
  • attempts to inoculate himself against charges of partisanship by stating “I was careful to avoid advocacy of a particular approach: increased taxes, decreased spending, or, what every bipartisan commission has recommended, a combination of the two.”
Now, if unsustainable growth of spending or decrease in revenue in any area of the federal budget threatens to create such unmanageable debt that astronomy will not be sufficiently funded, then it is certainly appropriate and not at all partisan for the AAS to insist that the problem be fixed.  
However, Dr. Helfand’s framing of the issue is so specific, and so reminiscent of talking points most commonly heard from partisan Republicans, that his column reads more like a polemic against the social safety net than a sober assessment of the federal budget.  Let me describe some of the ways in which I feel Dr. Helfand has framed this important issue in an unnecessarily partisan manner:
1) He compares the ratio of investments to one collection of budget items — entitlements — but not to the budget as a whole, including defense. (Note that the lines on his graph conspicuously add to less than 100%).  If Dr. Helfand’s concern is investment spending, surely the objective position would be to consider all other government spending, not just entitlements?  

Or, if his point is that a particular entitlement program threatens the entire budget balance, surely the focus should be on that program and not a broad class of “entitlements” (which includes many small and many solvent programs including student loans, Pell grants, college tax credits, and 529 college savings programs)?

This sort of rhetorical sleight of hand (a cousin of “socialsecurityandmedicare”) is often employed by partisans to use the uncontrolled growth of one program as justification for cutting other, unrelated programs.  I suspect that Dr. Helfand is not intentionally engaged in this sort of deception here (he may not even realize that his column is arguing against the growth of things like Pell grants).  Rather, it appears that he is simply repeating the talking points of Committee Chairman Wolf (R-VA) without digestion.

2) The graph he shows begins in the Sputnik/Apollo era, when investment was at a maximum, and before Johnson’s Great Society initiatives, when entitlement spending ballooned.  The contrast between 1.15 and 0.27 is thus “cherry picked” in two ways: temporally, and with a selective and questionable choice of denominators.  This is to be expected in partisan politics, not in a sober assessment of the overall budget environment.

3) It is not just certain “loyal allies” that are surprised and disturbed by Dr. Helfand’s focus on “entitlements” as a whole; many of his fellow astronomers will be, too.  This is why comparing such attitudes to climate change denial, which is perpetrated by the dishonest and the misinformed, is so outrageous.  Surely Dr. Helfand knows that the charge of being like a denialist is among the most grave and severe he could hurl at a scientist?  Did he not expect that many of his fellow astronomers would be strongly and personally offended by this, or was that in fact his intention?  Neither option reflects well on Dr. Helfand.

4) His openness to tax increases as a solution is actually a non sequitur to his overall argument, and so fails to insulate him from accusations of partisanship:  If uncontrolled, exponential entitlement growth is the problem, then how could increased taxes be anything but a short-term solution?  
Of course, many astronomers support increasing taxes to fully support increased and sustainable entitlements as a desirable step towards a more just, educated, and flourishing society.  Much of Western Europe has made this sort of choice without sacrificing science.  This goal is orthogonal to increased spending on science and investment, not antiparallel to it.  It is also completely consistent with checking unnecessary increases in the health care costs inherent in some entitlement spending.  Perhaps this is what Dr. Helfand meant, but one must read his words quite generously to say so.
Again, my purpose is not to argue with Dr. Helfand, or anyone reading this blog, about the proper future form of the overall federal budget.  I will do that as a citizen but not from my position as an astronomer [if you know me as a citizen, feel free to ask me about it in another forum].  Rather, I wanted to point out that the reason Dr. Helfand will get pushback from his colleagues for this column, and silent stares in the halls of Congress from our “loyal allies”, is that he has apparently bought into a particular, partisan framing of the budget debate.
This framing is what separates the merely political from the partisan.  The idea that unsustainable growth in particular entitlement programs must be checked is, indeed, bipartisan and something the AAS might support:  “ObamaCare” will decrease Medicare growth by $716 billion by reducing payments to HMOs, and Vice Presidential candidate Paul Ryan adopted these savings in earlier versions of his budget plans, which were in turn hailed by Republicans as a model of fiscal prudence.  Yet rather than being a point of agreement, this is now a central theme of negative campaigning as both sides accuse the other of “gutting Medicare” (this is not to argue that both political campaigns are equally hypocritical or mendacious on this point, just that the line between politics and partisanship is only a buzzword away).
In his column, Dr. Helfand relates a story wherein Chairman Wolf asks President Elmegreen (his predecessor) why we astronomers are not “up in arms and doing something about entitlements and corporate tax breaks.”  Dr. Helfand continues: “The obvious answer is that it is the job of his colleagues to fix those problems, but I think he has a point–because those colleagues report to us.”  I agree with Dr. Helfand that we, both as citizens and through our professional society, should ensure that our nation’s financial future is sound.  But by choosing to vehemently attack government “payments to individuals” as the enemy of astronomy, in his capacity as AAS President, on the front page of our newsletter, he has unnecessarily steered the AAS away from advocacy for science and astronomers and into partisan politics.  I think this was a mistake.

Science Backstage

Two of my closest collaborators are having what passes for a “fight” in astronomy:  my erstwhile postdoctoral adviser (and contemporary in graduate school at Berkeley) Jamie Lloyd at Cornell, and my good friend and colleague (and contemporary student under Geoff Marcy) John Johnson.  (I’ll point out that there are plenty of spats in our field that would pass for a fight anywhere, maybe even on cable TV, but this sort is more common.)

[Photo: John Johnson]

John’s graduate thesis was about the frequency of planets orbiting intermediate mass stars, those of about 1-2 solar masses.  Such stars are very hard to study on the Main Sequence, where they are called “A stars”, rotate rapidly, and have few spectral lines for us to study for Doppler shifts.  There is a brief period at the end of such a star’s life, however, when it is running out of hydrogen: its core becomes more luminous and the star begins to reorganize itself internally in response to the new energy trying to get out.  Its surface cools, it develops a convective envelope, a magnetic field develops, it spins down, and it starts to look like a very bright version of the Sun.  In other words, a nearly ideal target for Doppler planet searches.  Astronomers call these stars “subgiants”; in a stroke of marketing genius John dubbed them “retired A stars”.
For the first year of his survey, he was sorely disappointed.  The stars had no hot Jupiters that would make for quick papers, and he despaired that his thesis would be a statistical analysis of the robustness of his null result:  massive stars have few detectable planets.  But then planets started popping up everywhere!  And big ones!  It turns out that the planets orbiting these massive stars have typical orbits longer than one year, but just short enough on the timescale of a graduate student’s career to be useful.  John published his gobs of planets, showing that massive stars have even more planets than Sun-like stars, and from there it was a quick hop, skip, and a jump from a fellowship at Hawaii, to a faculty job at Caltech, to the Pierce Prize (given annually by the AAS to the most outstanding young astronomer;  I guess I should point out that John has a lot of other research notches on his belt, too).

[Photo: Jamie Lloyd]

While all of this was happening, I was looking for work.  Fortunately for me, Jamie Lloyd, a professor at Cornell and someone I had known at Berkeley when I was a younger student, needed someone with planet hunting experience to design the survey and planet-fitting software for TEDI, an experimental Doppler instrument at Palomar.  I spent a year and a half at Cornell working with his group, including three people I keep in touch with still:  Angie Wolfgang (now a graduate student at Santa Cruz), Barbara Rojas-Ayala (now at AMNH), and Phil Muirhead, who is now John’s postdoc!  Astronomy is a small, small world;  there is a lesson here about getting to know your coworkers.

Anyway, at Cornell I learned a lot from Jamie about science, funding, advising, and being a successful astronomer, which is a big part of the reason I got my job here at Penn State.
A while back, Jamie sent me a draft of a paper he was working on about John’s subgiants.  He was puzzled by a few things that didn’t make sense to him:  why were all of the subgiant-hosted planets orbiting stars with masses greater than 1.3 solar masses?  This seemed odd, since 1.2 solar mass stars should be much more common than 1.5 solar mass stars.  Why were some of their rotation periods so anomalous?  And didn’t these two stars have obviously miscalculated masses?  Finally, 1.5 solar mass stars are really rare, and such stars that just happen to be near the end of their lives (but not at the very end; that is, subgiants, not giants) should be much, much rarer.  We should be surprised if we find many at all.
Jamie wondered if it was possible that the masses were being incorrectly calculated from the isochrones (basically, an isochrone is a list of properties of stars of varying masses but the same age; we use them to determine the age and mass of a star of a given composition).  There could be several reasons for this: the models could be wrong (and, indeed, even the modelers admit that the subgiant and giant tracks for solar mass stars could easily be off by a lot), our measurements of the stars’ temperatures could be wrong (but they would have to be VERY wrong), or something else.  The implication was that John’s stars weren’t actually as massive as he thought.  Jamie speculated about why they might show so many planets:  perhaps those planets were actually false positives caused by the stars’ atmospheres, and the lack of short period planets was due to tidal capture (the star “ate” the planet).  He called his paper “‘Retired’ Planet Hosts: Not So Massive, Maybe Just Portly After Lunch,” and after some back and forth with me and sending a courtesy copy to John for comment, he published it.  Since then, he has given talks about it at several places, putting John’s conclusions into doubt throughout the community.
John was never convinced by the paper, but eventually enough people kept asking him about it that he sat down and thought very hard about how the masses of the stars might be wrong.  He and his student Tim Morton decided to focus on two questions, rather than attempt a point-by-point rebuttal of every claim and assertion in Jamie’s paper (and subsequent talks).  The two questions were:  First, based on everything we know about stellar evolution and the Galaxy, do we really expect to have enough bright 1.3-1.5 solar mass stars in the sky for John’s thesis work to have been possible?  If not, then it’s possible that he was actually looking at some other kind of star.  Second, is there any way that we could mistake a less massive star for a massive star, assuming the models and measurements were correct?
Tim used a program called TRILEGAL to synthesize a Galaxy full of stars and then replicated the target selection from John’s thesis, picking out apparent subgiants using John’s actual search criteria and then looking at their actual masses.  They then performed a Bayesian analysis to determine what the best estimate of each star’s mass should be, given its measured temperature and abundances, taking into account the fact that massive stars are rare and don’t spend much time on the subgiant branch.  They determined that there was no way to be fooled:  less massive stars are too different from massive stars to be confused for them, and there are plenty of bona fide 1.5 solar mass subgiants bright enough for John to have studied in his thesis.  This careful analysis will lead to a downward revision of the masses of John’s stars, but only by about 5%.
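To give a flavor of how such a prior-weighted (“Bayesian”) mass estimate works, here is a minimal toy sketch in Python.  Everything in it (the mass-temperature relation, the pretend measurement, and the prior exponents) is made up purely for illustration; the real analysis uses TRILEGAL simulations and full isochrone grids, not these numbers.

```python
import numpy as np

# Toy Bayesian mass estimate: posterior over stellar mass given a measured
# temperature, weighted by priors that penalize rare, short-lived massive subgiants.
# All numbers below are illustrative, not values from the actual paper.

masses = np.linspace(0.8, 2.0, 500)          # candidate masses (solar masses)

def teff_of_subgiant(m):
    # Hypothetical monotonic mass-temperature relation for subgiants (toy model).
    return 4800.0 + 900.0 * (m - 1.0)

teff_obs, teff_err = 5000.0, 80.0            # pretend measurement (K)

# Likelihood: Gaussian in the measured temperature.
likelihood = np.exp(-0.5 * ((teff_obs - teff_of_subgiant(masses)) / teff_err) ** 2)

# Priors: an IMF-like power law (massive stars are rare) times a lifetime term
# (massive stars spend less time on the subgiant branch). Exponents are toy values.
imf_prior = masses ** -2.35
lifetime_prior = masses ** -2.5

posterior = likelihood * imf_prior * lifetime_prior
posterior /= posterior.sum()                 # normalize over the mass grid

print(f"posterior-peak mass ~ {masses[np.argmax(posterior)]:.2f} Msun")
```

The point of the exercise is only that the priors drag the best-estimate mass slightly below the value you would get from the temperature measurement alone, which is the qualitative behavior described above.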
John sent me a copy, and after several readings and back-and-forths about the statistics and interpretation and structure he added me as an author.  We sent a copy to Jaime, and we had a few back-and-forths with him that helped us properly characterize his claims and focus our critique.  
The paper, “Retired A Stars: Truly Massive, Against All Odds”, is available here.  The bottom line is that we suspect that part of Jamie’s error was not accounting for the Malmquist Bias (which John insightfully compares to the Infield Fly Rule in a must-read post on his blog, here).  Also important to note (though not in this paper, because it is coming out later) is that stars in the 1.1-1.3 solar mass range tend to be too hot as subgiants to make good Doppler targets, which explains the strange distribution of planet-host masses (too many 1.5 solar mass stars compared to 1.2 solar mass stars).
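For readers who haven’t run into the Malmquist bias before: in a brightness-limited sample, intrinsically luminous stars can be seen to much larger distances than faint ones, so they are over-represented relative to their true space density.  A toy Monte Carlo (all numbers invented for illustration, and much simpler than the real selection function) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Malmquist-bias demo: two stellar populations with different luminosities,
# equally common per unit volume, observed with a single apparent-brightness cut.
n = 200_000
d = rng.uniform(0, 1, n) ** (1 / 3) * 500.0   # distances uniform in volume, out to 500 pc
luminous = rng.random(n) < 0.5                # half the stars are 4x more luminous
L = np.where(luminous, 4.0, 1.0)              # luminosities in arbitrary units

flux = L / d**2
detected = flux > 4.0 / 300.0**2              # cut: faint stars visible only to ~150 pc

print(f"luminous fraction: true {luminous.mean():.2f}, "
      f"in brightness-limited sample {luminous[detected].mean():.2f}")
```

In this toy version the brightness-limited sample is dominated by the luminous population even though the two are equally common in space, which is the flavor of bias at issue.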

It’s been fascinating to be backstage on both sides of a scientific dispute, and to move from “umpire” to “participant”.  I encourage you to read John’s blog post to contrast it with the perspective of being on only one side.  About the “fundamental truth” of the masses of the stars from his thesis, he writes:
That fundamental truth didn’t care about my career or my pride or my dreams. As a scientist I had to step outside of myself, set my pride aside, and seek out the truth.
While you’re there, subscribe to Mahalo.ne.Trash!

Barnard’s Star’s Planets

The great astrometrist Peter van de Kamp discovered exoplanets orbiting the second closest star to Earth (after those in the alpha Centauri system) in the 1960s.

Barnard’s star — famous among astronomers both for its proximity and for the record speed with which it moves across the sky (its “proper” motion) — is about 6 light years away, but is so small and cool that it requires a telescope to spot.  It moves about 10 arcseconds (about 1/360 of a degree) every year with respect to background stars, meaning that astronomers looking for it must actually calculate where it is this year or they might not be able to find it in their telescope’s small field of view.
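To put that in perspective, here is a quick back-of-the-envelope check (the proper motion is the published value of roughly 10.3 arcseconds per year; the comparison to a “few-arcminute” field of view is just a typical ballpark, not any particular instrument):

```python
# Barnard's star's proper motion accumulates quickly compared to a narrow field of view.
mu = 10.3                              # proper motion, arcseconds per year
years = 25
drift_arcsec = mu * years
drift_arcmin = drift_arcsec / 60.0
print(f"{years} yr of drift: {drift_arcsec:.0f} arcsec = {drift_arcmin:.1f} arcmin")
# ~4 arcminutes of drift in 25 years: enough to carry the star out of a
# few-arcminute field if you point at its old coordinates.
```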


Time lapse image of Barnard’s star moving across the background sky by the Backyard Astronomer.  Click through for an animated gif.

In 1986, before any substellar companions had been discovered (Latham’s detection of HD 114762b would be published three years later), van de Kamp published a book on “dark companions” to nearby stars, in which he summarized the results he had accumulated over the decades.  He detected these companions by tracking the stars’ positions very carefully as they moved.  He noticed that Barnard’s star seemed to travel not in a straight line, but to wobble back and forth along its path.  Most of this motion was due to parallax — as the Earth orbits the Sun our line of sight to Barnard’s star changes slightly, and so it seems to make an annual wobble with a full amplitude of about 1 arcsecond.  But van de Kamp saw residuals to this motion, and in 1963 attributed them to the presence of a planet orbiting Barnard’s star.  Van de Kamp reasoned that the planet was basically tugging on Barnard’s star as it went around, causing this wobble.
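The quoted wobble amplitude follows directly from the distance.  A rough check, using the ~6 light year figure from above (the real parallax is about 0.55 arcseconds, so this is consistent):

```python
# Parallax check: parallax (arcsec) = 1 / distance (pc); the full annual wobble
# is roughly twice the parallax angle.
LY_PER_PC = 3.26156
d_pc = 6.0 / LY_PER_PC            # ~1.84 pc
parallax = 1.0 / d_pc             # ~0.54 arcsec
print(f"parallax ~ {parallax:.2f} arcsec, full annual wobble ~ {2 * parallax:.1f} arcsec")
```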

The measurements van de Kamp was making were very difficult.  Tracing a path across the sky to this level of precision over many decades is almost impossible with photographic plates.  One clue that something might be wrong was that the orbital solutions he found kept changing.  At first he detected a single companion whose orbit he had traced for about one full period, but as he collected more data he had to alter the orbital solution, and then later add a second planet to the solution, in order to make sense of the residuals.  The fact that his models never seemed to properly predict the future residuals was a clue that perhaps his error bars were underestimated and he was just fitting noise.
Attempts to reproduce van de Kamp’s measurements did not succeed. Gatewood & Eichhorn and Bartlett tried and failed to detect these telltale residuals from archival plates, and an astrometric study with the Hubble Space Telescope similarly came up empty.  The consensus is that van de Kamp was mistaken.
Still, there is always the possibility that it is the modern astronomers, not van de Kamp, who are not being careful enough. This is especially true since the orbital periods of the purported planets (around 12 years and 20 years) are so long and the orbital parameters so uncertain that it takes years of observation to completely confirm or refute all possible planets consistent with van de Kamp’s solution.
Barnard’s star has been on the California Planet Survey programs at Lick Observatory since 1986 and at Keck Observatories since 1997.  Our velocity precision is better than 5 m/s at Keck, and even at Lick our early precision was sufficient to detect van de Kamp’s planets, if they were in an edge-on configuration.  In a paper out on astro-ph today Berkeley student Jieun Choi and Geoff Marcy show that these decades-long radial velocity data sets firmly rule out van de Kamp’s planets, or really any gas giants orbiting Barnard’s Star within 5 AU.
We actually bent over backwards to make the purported planets fit the data.  We ignored van de Kamp’s measurements of eccentricities and phases for the planets and found a 2-planet solution with the proper periods where the two planet signals destructively interfere for the duration of the Keck observations.  This means that in principle a pair of planets with those periods could have evaded detection, but only if the system is almost face-on (i > 160 degrees) so the planets produce a signal too small for us to see.  Such a face-on configuration is technically within van de Kamp’s error bars for the inclination, but the phases and eccentricities are quite contrived, and not necessarily consistent with the putative planets’ orbital parameters.
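The reason a nearly face-on system can hide from a Doppler survey is that the radial-velocity semi-amplitude scales with sin i.  Here is the standard Doppler amplitude formula in a short sketch; the planet mass and period are placeholders in the rough range of van de Kamp’s claims, and the stellar mass is the approximate mass of Barnard’s star, not the values fit in the paper:

```python
import numpy as np

# Standard Doppler semi-amplitude:
#   K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (M_star + m_p)^(2/3) / sqrt(1 - e^2)
G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, M_JUP = 1.989e30, 1.898e27  # kg
YEAR = 3.156e7                     # seconds

def rv_semi_amplitude(m_planet_mjup, period_yr, m_star_msun, inc_deg, e=0.0):
    """Radial-velocity semi-amplitude K in m/s."""
    mp = m_planet_mjup * M_JUP
    ms = m_star_msun * M_SUN
    P = period_yr * YEAR
    return ((2 * np.pi * G / P) ** (1 / 3)
            * mp * np.sin(np.radians(inc_deg))
            / (ms + mp) ** (2 / 3)
            / np.sqrt(1 - e**2))

# A roughly Jupiter-mass planet in a ~12-year orbit around a ~0.16 Msun star:
for inc in (90, 160, 178):         # edge-on, tilted, nearly face-on
    print(f"i = {inc:3d} deg  ->  K ~ {rv_semi_amplitude(1.0, 12.0, 0.16, inc):5.1f} m/s")
```

Edge-on, such a planet would produce a signal of tens of m/s, far above the few-m/s precision quoted above; pushed to within a few degrees of face-on, the same planet drops below the noise, which is the loophole the contrived two-planet solution exploits.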
So this is all but a dispositive null detection of van de Kamp’s planets (and, combined with the modern-day astrometry, I would say these null detections are truly dispositive).  
But let’s not beat up on van de Kamp, who was a brave pioneer with a proud legacy.  I’ll close with a quote from Choi’s paper, which puts it well:
…Peter van de Kamp remains one of the most respected astrometrists of all time for his observational care, persistence, and ingenuity. But there can be little doubt now that van de Kamp’s two putative planets do not exist.

Infinite Earths, Infinite Books

My post on infinite Earths inspired some discussion with folks who have thought about it, and I thought I’d consolidate my ideas.

The argument, as Derek Fox points out to me, boils down to two assumptions:  infinity and stochasticity (or randomness).  If the Universe is infinite, and if everything in it is basically governed by the same natural laws, subject to true randomness, then it must be true that there is some (very large) distance at which there is an arbitrarily close copy of you.  Either assumption may be wrong, but I think if they are both correct then the argument holds.

Julia Kregenow reminds me that we really don’t know that the Universe is infinite, and in fact we may never know.  This may seem obvious, but it’s actually a recent development.  The Universe has a global curvature that we can measure;  if it has a  “closed” geometry then it must be finite, and if it’s open then it must be infinite.

As an analogy, imagine that you are a sailor on the equator of Kevin Costner’s Waterworld (i.e. almost no land on the whole planet), and you want to know if the ocean goes on forever.  You drop a buoy anchored to the sea floor, set your compass for true north, and sail until you reach the North Pole.  You then turn 90 degrees left and head out exactly the same distance (so you’re back at the equator), then turn 90 degrees left again and that same distance later you find the buoy!  Two 90 degree left turns, and you are back where you started (try it on a globe).  If you turn left 90 degrees you could do the trip again, forming an equilateral triangle with three right angles.  You can conclude that the ocean is not flat;  in fact you must have gone a quarter of the way around the planet on each leg.  It must be that you could fill the ocean with a finite number of buoys.

But now imagine that the ocean is flat and infinite:  no matter how far you sailed, you could never do an experiment like the one I described.  Any closed triangle you traced would have interior angles adding up to 180 degrees (not 270, as in the example above).  Either the ocean has an edge, or the ocean is infinite;  it can’t be finite, unless it is so huge that you just can’t get far enough to detect the curvature.
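The angle-sum test can be made quantitative: on a sphere of radius R, the interior angles of a geodesic triangle exceed 180 degrees by an amount proportional to the triangle’s area (the “spherical excess”).  A quick check with the buoy-to-pole-to-buoy triangle from the example above:

```python
import numpy as np

# Spherical excess: for a geodesic triangle on a sphere of radius R,
#   (sum of interior angles) - pi = area / R**2.
# The buoy-pole-buoy triangle has three 90-degree corners.
angles = np.radians([90.0, 90.0, 90.0])
excess = angles.sum() - np.pi          # = pi/2
R = 1.0                                # unit sphere
area = excess * R**2                   # = pi/2, exactly 1/8 of the sphere's 4*pi*R**2
print(f"excess = {np.degrees(excess):.0f} deg, "
      f"area fraction of sphere = {area / (4 * np.pi * R**2):.3f}")
# On a flat (Euclidean) ocean the excess is exactly zero, no matter how big the triangle.
```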

The Universe is, as best as we can tell, “flat” (in the sense that its three-dimensional space shows no measurable curvature).  Either it is infinite, or it is finite but so huge we can’t tell the difference.  So we might get out of the infinite Earths argument by assuming the Universe is finite, but there’s no way we can ever tell if this is correct (even if we get so good at measuring curvature that we eventually find some, it could just be a local wrinkle, a swell in the cosmic ocean fooling us into thinking it’s curved).

The next ingredient is randomness.  If I write out the number pi in binary, I get an apparently random collection of 1’s and 0’s that go on forever and never repeat.  I can also look at the binary representation of the Complete Works of William Shakespeare as a big string of 1’s and 0’s.  As I look at more and more bits of pi, the probability that I will find exactly this string grows with every additional block of bits, if the bits are truly random (which, I will note, they are not).  There must be some number of bits so large that the probability is so close to 1 that I will be satisfied that Romeo and Juliet is, indeed, encoded in pi.  Of course, also encoded in pi is a version of Romeo and Juliet where all of the Capulets are named “Beavis”, all of the Montagues are named “Butthead”, and all of the swords are whoopie cushions.
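The probability claim can be made concrete with a toy experiment: in a truly random bit stream, the chance that a fixed pattern has appeared somewhere climbs toward 1 as you read more bits.  (The 12-bit pattern below is a made-up stand-in for the bits of a play, chosen short so the trend is visible quickly.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate, by Monte Carlo, the probability that a fixed bit pattern appears
# somewhere in the first n bits of a truly random stream.
pattern = "101100111000"
trials = 200

for n_bits in (1_000, 10_000, 50_000):
    found = sum(pattern in "".join(map(str, rng.integers(0, 2, n_bits)))
                for _ in range(trials))
    print(f"{n_bits:>6,} random bits: pattern found in {found / trials:.0%} of trials")
```

The found-fraction marches toward 100% as the stream lengthens; a longer pattern (like Shakespeare) just needs an astronomically longer stream to reach the same certainty.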

This metaphor maps (almost) perfectly onto the Universe;  in fact, it’s already been done.  In The Library of Babel, Jorge Luis Borges1 imagines an infinite library of identical rooms filled with identically formatted books: each book has the same number of pages, each page the same number of lines, and each line the same number of characters, drawn from a set of 25.  The twist is that each book is filled with gibberish, and each book is different.  The implication is that the library is infinite (or effectively so) and that every possible book is somewhere in the library.  As in the pi example, there must be Shakespeare in there somewhere.

But this is only true if the books are random and the library is infinite, or if the books are ordered (in a complex way, perhaps) and every single possible book is present (in which case the library could be finite).   Likewise, if the Universe is random and infinite then eventually you have to find Shakespeare (the actual guy, writing the actual plays) over and over, but if there is some underlying order then this needn’t be true.

Or, as Derek pointed out to me, it is fallacious to say “I have an infinite number of teapots, so one of them must be orange” (or, more precisely here, “one of them is orange, so there must be an infinite number of orange ones”).  This would only be true if the colors of the teapots are random and include orange as a possibility.

To get back to my pi example, the bits of pi are definitely NOT random; they just seem that way.  I don’t think anyone has proven that Shakespeare is or isn’t in those bits, and it may be unprovable for all I know.  Similarly, we don’t know that the Universe is truly stochastic in the way necessary for the infinite Earths argument to work.  (We know that it is fundamentally random at the quantum level, but we don’t know how the laws of physics might be different very far away from us — for example, if the speed of light c increased along the z axis, the electron charge e along the y axis, and Planck’s constant h along the x axis, then the Universe would be ordered in such a way that there is certainly no repeat of us anywhere.)

I actually find the finite Universe to be a more satisfying way out.  If the Universe is finite in the spatial dimension, then infinity is a purely human concept;  there is nothing infinite about the Universe (or infinitesimal;  the Planck length means you cannot even infinitely subdivide things).  Yes, the Universe might go on “forever” in time in principle, but it did have a definite beginning, so at any point in the future the age of the Universe is finite, and the future is still unwritten (the Universe is not deterministic).

Science teaches us a lot;  but the fact that the Universe is very flat means the answer to the infinite Earths question isn’t one of the things it can teach us (though it’s profound to me that it took an actual physical experiment to figure that out!).

[Image from this link to the Library of Babel].

1Borges was not unaware of this parallel. The first words of The Library of Babel are “The universe (which others call the Library)…”.

Infinite Earths?

No, I’m not writing about a comic book crisis.  Rather, this is about an argument I heard once in an astronomy colloquium that has always bothered me.  I’m sorry I can’t find a link to a formal description of the argument, though I think Max Tegmark (who else?) has advanced various alternative versions of it, especially in the context of the “multiverse”.

Now there are lots of things that physics can teach us about philosophy: quantum mechanics demonstrates that the Universe is not deterministic (Bell’s Theorem, specifically, rules out local hidden variables): if we “reran” the Universe again, the quantum mechanical dice would land differently and different things would happen.  But it is causal — relativity insists on this (that is, effects cannot precede their cause;  you cannot travel back in time and prevent your own birth (because you can’t travel back in time!)).  These are deep issues that have been argued for centuries (millennia?) but are (surprisingly!) actually answerable.

Anyway, there is another profound result that predicts that you are not unique, and that your nearly exact double lives on a planet far, far away.

The argument goes like this:  the Universe, so far as we can tell, is infinite in spatial extent.  We can only observe out to the “horizon” — basically the farthest distance from which light could have reached us in the age of the universe — but we have every reason to believe that it just goes on forever in every direction.  Now, whether this is really true or not is beside the point, because the argument is profound either way.

Now, IF the Universe is infinite, then one can argue that everything that could happen — that is, every event with a finite (nonzero) probability — will happen somewhere, eventually, and will happen an infinite number of times!  This is a consequence of giving a nonzero probability an infinite number of independent tries.  It’s like the argument that monkeys at typewriters will eventually bang out Shakespeare.
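The “infinite number of tries” step can be made explicit: if an event has any nonzero probability p per independent try, the chance it never happens in N tries is (1 − p)^N, which goes to zero as N grows without bound.  A one-line check, with a deliberately tiny and arbitrary p:

```python
# P(at least one occurrence in N independent tries) = 1 - (1 - p)**N -> 1 as N -> infinity,
# for any p > 0. The value of p here is purely illustrative.
p = 1e-9
for N in (1e6, 1e9, 1e12):
    print(f"N = {N:.0e} tries: P(at least once) ~ {1 - (1 - p)**N:.3f}")
```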

So, the argument continues, this includes events like planets just like ours in Solar Systems just like ours in observable universes just like ours with people on them just like us doing exactly what we are doing now.  There are, the argument goes, an infinite number of copies of you reading this blog post somewhere in the Universe.

This argument has always bothered me on a mathematical level (and not just a philosophical one), but I’ve had a hard time pinning down why.  Mathematically, we can describe a toy model for the “state” of the observable universe (and us within it) as a position in a very high dimensional phase space (6 dimensions for each particle: position and momentum).  Every causally separated region of the Universe occupies some point in this phase space.   So if we start populating this phase space with a point for each part of the Universe (an infinite number of them!) then it seems inevitable that there will be a point arbitrarily close to ours (that is, a region of the Universe arbitrarily similar to our own).

Is this obviously correct, or is there a big logical hole here?  The best I can do is consider that not every point in this big phase space is equally likely, and that in fact to arrive at a given point the universe must traverse an allowed path to it.  This makes a description of our universe more akin to a curve than a point in a high dimensional space.

Now, the cardinality of all possible curves is thought to be higher than the cardinality of points in a space.  That is, just as there are more points in space than there are whole numbers (weird, but true: a classic proof is called “Cantor’s Diagonal Argument“), there are also more possible curves than there are points in an infinite space.  Does this break the “infinite Earths” argument, or not?  I don’t know.  The space on a dart board is mappable to the space in the Universe, and an infinite number of darts thrown at it randomly will still have an arbitrarily large number of them arbitrarily close to an arbitrary point on it (even though there are more places for darts to land than there are darts), so maybe not.

Anyway, it’s a profound consequence of a Universe with an open geometry (which we now think we live in), so it seems to me that it is worthy of a rigorous analysis.  Does anyone know of one?

Update: Julia Kregenow reminds me that the Universe might easily NOT be infinite.  Because we think that “Omega total” is very close to, or exactly equal to, 1, the question of whether the Universe is infinite or not may be unanswerable!  If it is finite, then there may be only one of you; otherwise, an infinite number.  My astro prof Dan Clemens taught us that there are only two frequencies in the universe: zero and infinity.  In the case of us, it’s either 1 or infinity, but it’s the same idea.

Old high school buddy and fellow exoplanet astronomer David Spiegel points out that the number of continuous curves is equal to the number of reals, so my argument about different cardinalities probably doesn’t hold.  I still want to know if the “infinite universe implies infinite Earths” argument is airtight, though.