Monthly Archives: October 2012

A New Jupiter Analog, and “Dispositive Null” in the Literature

My student Sharon Wang (王雪凇) has recently finished a big paper with lots of goodies in it. 

First of all, there is the announcement of a new Jupiter analog.  It's a bit on the massive side (at least 3.4 times the mass of Jupiter), but it's on a circular orbit with a 7.5-year period around a star a bit cooler than the Sun, HD 37605.  There are not very many of these in the literature, but they're starting to pop out now that Doppler searches have gotten old.
We actually co-discovered this planet with the Texas group, led by Bill Cochran, Mike Endl, and Phillip MacQueen.  This star was already known to host a 55-day period super-Jupiter;  in fact that planet was the first planet discovered with the Hobby-Eberly Telescope.  The Texas team had already discovered the second planet almost three years ago, and we noticed it soon thereafter.
We were trying to firm up the orbit with Stephen Kane as part of the TERMS project to detect the transits of long-period planets.  We were using Keck and HET velocities together, and noticed that there must be a long-period planet because there were large residuals to the published solution.


Stephen announced this at an AAS meeting where Bill was in the audience, and a collaboration was born.  The Texas team very graciously sent us all of their raw data so that we could do a robust joint analysis, and now it's finally here.
Sharon worked very hard on formalizing our procedure for calculating robust uncertainties for orbital parameters, especially the transit time parameter.  Uncertainties for transit times cannot be accurately calculated from published orbital solutions (a point we describe in detail in this paper), and Sharon's new code, called "BOOTTRAN", uses a statistical bootstrapping technique to determine orbital parameters and their uncertainties, including transit times.  BOOTTRAN is available for download along with RVLIN, our IDL code for fitting radial velocities.  Sharon also provides a thorough statistical description of how our bootstrapping works in an appendix.
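The statistical details are in the paper's appendix, but the core move is easy to sketch.  Below is a minimal, hypothetical Python illustration of a residual-resampling bootstrap; it is not BOOTTRAN (which is IDL and fits full eccentric Keplerians via RVLIN), just the idea, with a fixed-period circular orbit standing in for the real model and all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_circular(t, rv, period):
    """Stand-in for a real Keplerian fitter like RVLIN: a linear
    least-squares fit of a circular orbit with a fixed, known period."""
    A = np.column_stack([np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period),
                         np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, rv, rcond=None)
    return coeffs, A @ coeffs          # parameters and best-fit model

def bootstrap(t, rv, period, n_boot=1000):
    """Residual-resampling bootstrap: draw residuals with replacement,
    add them back onto the best-fit model, refit, and take the scatter
    of the refit parameters as the 1-sigma uncertainties."""
    best, model = fit_circular(t, rv, period)
    resid = rv - model
    trials = np.empty((n_boot, best.size))
    for i in range(n_boot):
        fake = model + rng.choice(resid, size=resid.size, replace=True)
        trials[i], _ = fit_circular(t, fake, period)
    return best, trials.std(axis=0)

# Synthetic example: a K = 50 m/s circular orbit with 3 m/s noise.
t = np.sort(rng.uniform(0, 2000, 60))                  # epochs [days]
rv = 50 * np.sin(2 * np.pi * t / 55.0) + rng.normal(0, 3, 60)
best, err = bootstrap(t, rv, period=55.0)
print(best, err)
```

The real code applies the same machinery to the full Keplerian parameter set, including the predicted transit time.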
Also in the paper is a thorough transit search by Gregory Henry’s APT array and Stephen’s team using the MOST satellite.  We were looking to see if our new solution for the inner planet allowed us to determine if the planet transits.  We find that the solution is good enough, and that the planet does not transit.
But we can’t just say we “failed to detect a transit”, because we didn’t fail:  we succeeded, at more than the 100-sigma level, in showing that there is no transit.  That’s the power of BOOTTRAN’s uncertainty estimates and MOST’s photometric precision.
So we have a “dispositive null” of non-grazing transits, and we said so in the paper, both in the title and in a footnote where we define the term.
Sharon also did a thorough comparison of BOOTTRAN’s uncertainties vs. the output of an MCMC dynamical model by Matthew Payne, which produced parameter uncertainties as well.  There are a lot of details and caveats, but the bottom line is that BOOTTRAN gives the right answers.
So fellow scientists:  you can now use the term “dispositive null” and cite its definition in the refereed literature.  You can now get accurate parameter uncertainties with all-in-one IDL code that uses and plays nice with RVLIN.  And there is a new multiplanet system on the block, one that contains a good Jupiter analog.

Benchmarks and Standards

I like to study benchmarks and standards, from velocities to wavelengths to stellar ages.

One of my interests is how binary stars and clusters help us set standards for stellar astrophysics.  One of the difficulties for the lowest-mass stars ("M dwarfs") is that their metallicity is very difficult to measure.  We normally measure the composition of stars (really, the relative abundance of "metals," meaning everything heavier than helium, compared to the Sun, or "metallicity") by studying their spectra and asking how many metals, and in what proportions, would be required to reproduce what we see.

But M dwarfs are cool enough that molecules form in their atmospheres, and molecules have very complex spectra that are very hard to model or measure in the lab.  This makes attempts to derive metallicity from M dwarf spectra very hard.

[Image: an interacting binary system]

The problem has been tackled from many angles, mostly by comparing binary systems where one star is a more massive, Sun-like star and the other is an M dwarf.  Since binaries presumably formed from the same cloud of gas, their metallicities should be the same, so we can measure the metallicity of the more massive star, then look to see what an M dwarf spectrum at that metallicity would look like.  This approach was followed by Bárbara Rojas-Ayala for her dissertation work at Cornell while I was there (using K-band spectral indices), and also by Penn State's own Ryan Terrien and Suvrath Mahadevan (using H-band indices).
Now, Ryan, Suvrath, and other members of their group have applied this method to an important benchmark M dwarf eclipsing binary with a white dwarf companion, CM Draconis.  Please hop on over to Ryan's blog and take a look at the great work they've done!  The bottom line is that they have significantly revised the metallicity of this important system, and that our best models of M dwarfs…  still don't make any sense.
[Image Credit: P. Marenfeld and NOAO/AURA/NSF]

Trendy Companions Set the Standard

When I was applying for postdoctoral fellowships, I had two major research plans I wanted to follow.  One involved stellar magnetic activity, and the other involved following up our “trend stars” at the California Planet Survey.  The idea is that we have been following hundreds of nearby stars at Lick and Keck with precise Doppler measurements for decades now, and some of them show constant accelerations.  In most cases, this is due to the presence of a very distant, high mass companion tugging on the star.  

These accelerations show up as radial velocity "trends" — they appear as a linear function of time, sometimes with a slight amount of curvature.  These are essentially small segments of sinusoids (or more complex Keplerian curves) from the high-amplitude, long-period orbits of binary companions.  Most of these companions are faint stars:  they are in a "sweet spot" where they are massive and close enough to induce detectable accelerations, but not so massive that they are bright enough to contaminate our spectra.  In some cases we actually know about the companion because we see it in guider images when we observe the star; in most cases it is either too faint or too close (or both) for us to know much about it.
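To make the "trend" picture concrete, here is a small, hypothetical Python sketch (the period, amplitude, and noise level are invented for illustration): a companion whose period is much longer than the observing baseline produces velocities that fit well with a line plus a touch of curvature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: a P = 100 yr, K = 500 m/s companion observed for
# only 10 years looks almost perfectly linear.
t = np.linspace(0, 10, 40)                     # observation times [yr]
rv = 500 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 3, t.size)

# Fit rv ~ v0 + (dv/dt) t + (1/2)(d2v/dt2) t^2; the linear term is the
# acceleration and the quadratic term is the hint of curvature.
c2, c1, c0 = np.polyfit(t, rv, deg=2)
print(f"trend: {c1:.1f} m/s/yr, curvature: {2 * c2:.2f} m/s/yr^2")
```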
Adaptive optics allows us to search more carefully for these very faint, very close companions and actually do photometry on them.  This was the case for the binary star system HD 126614 AB, where the A component has a planet orbiting it, and the B component induces an overall linear trend in the radial velocities on top of the planetary signal.  David Bernat at Cornell (at the time) used AO at Palomar to pick out the B component, an M-dwarf star, and the combination of photometry and the observed acceleration allowed us to constrain the actual physical orientation of the B component with respect to the A component in this paper.
In the cases where the companion is a late-type (M dwarf) star or brown dwarf, it is really, really interesting.  Brown dwarfs are “stars” that are too small to fuse hydrogen, so they tend to be very dim but they can be imaged in the near infrared when they are not too old.  Finding middle-aged M dwarfs or brown dwarfs to test models with is hard because they tend not to be in their birth cluster, so you have to look all over the sky to find them, and then they don’t have any context to tell you how old they are or what they are made of or how massive they are.  A brown dwarf orbiting an ordinary, middle-aged star, though, probably shares its age and composition with that star and can be “weighed” by measuring the size of its orbit, which means that it can serve as a benchmark for similar objects in the field.  
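The "weighing" is just Kepler's third law: once imaging and the Doppler acceleration pin down the companion's orbital period P and semimajor axis a, the total mass of the pair follows from

$$ M_\mathrm{tot} = \frac{4\pi^2 a^3}{G P^2}, $$

and since the primary's mass is known from its spectral type, the companion's mass falls out by subtraction.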
Unfortunately for me, the panels that reviewed my fellowship applications did not see the deep wisdom in this approach.  Fortunately for astronomy, though, Justin Crepp (formerly a postdoc at Caltech and now a new faculty member at Notre Dame) has picked up the torch I set down to pursue other things, and has inaugurated a program to do exactly this.  The results have been great.  Last year he determined the three-dimensional orbit of a brown dwarf orbiting the star HR 7672 in this paper, yielding a dynamical mass for the object.  This makes it an important benchmark for brown dwarf models.
Building on this success is the TRENDS (TaRgetting bENchmark-objects with Doppler Spectroscopy and imaging) survey, which extends those results to as many late-type stars and brown dwarfs as possible.
The first paper in the TRENDS series hit astro-ph this past week and illustrates how well the survey can find benchmark M dwarfs.  This is allowing us to complete the census of multiple-star systems in the solar neighborhood, and providing lots of late-M benchmarks to compare with the field stars used in planet-hunting programs like ours at Penn State.
Check it out, and keep an eye out for more results from TRENDS!

Grad student work hours

There is an important conversation going on on astrobetter here about how faculty communicate work expectations to their students.  I recommend John Johnson’s post and Julianne Dalcanton’s post on Cosmic Variance as follow-up reading. 

My comments are in the astrobetter comments section.  The bottom line is that I see what the faculty who wrote a letter to "buck up" their underachieving grad students were trying to do, and why; but among their mistakes were misremembering their own lives as grad students and attempting to impose a similar experience on all of their own students, without respecting those students' career goals.

Update: OK, it's still awaiting moderation.  I reproduce it below.  (I didn't want to do this before because I don't want to fragment the conversation; to that end, please put any comments on astrobetter or in the comments section of another blog following this story.)

I’m glad we’re having this discussion, because I think a lot of faculty read that letter and think, for the most part “geez, I wish we could say those truths to OUR students!” Let me share why I think that is, and what’s wrong with that approach. 

I think that the whole 80-100 hours per week number comes with a substantial dollop of “in my day we walked to school in 10 feet of snow uphill both ways.” I’m sure that I put in 80-100 hours occasionally, if you count mealtime; and if you count travel time and observing time then observing runs can certainly put you up to that number. But few students put in 100 hours of actual work in a week, and even then only rarely. I mentally shifted that number to “50-70 hours per week” of actual research work when reading the letter. 

I look through the tone and details of this letter and see misguided faculty actually trying to help (is it a sign of assimilation by the tenure-track Borg that I sympathize at all?!).  They ask the students to come to them with their problems, they explain that however tough the audiences are internally, they're tougher outside, and they want the students to be realistic about what it takes to be like them.  These are laudable goals, in principle. 

 But the last one is the problem: the underlying assumption of the letter is that students should be more like them, striving to maximize their chances at prize fellowships and tenure-line positions at Prestigious U at arbitrary personal cost. 

 Yes, it’s It’s incontrovertibly true that of otherwise identical students putting in 40 and 80 productive hours per week, respectively, the 80 hr/wk student will have a better cv and be more employable. Pointing out that the 60-80 hr/wk students are your primary competition for the “best” jobs is perfectly true. But that depends on what you think the “best” jobs are, and whether you actually want to do what’s necessary to get them. 

 If a student is productive at 40 hr/wk and happy with that, then they should be encouraged to maintain that pace with their eyes wide open regarding the likely jobs that will be available to them on the other side, according to their personal productivity. After all, 40 hours per week of actual, hard, no-goofing-off work can be a lot more productive than 80 hours of stressed-out, tired, procrastination-filled drudgery. 

I think this is at the heart of Kelle's excellent follow-up thread: what DO we want?  I think we want professors to acknowledge and celebrate that some students are happy NOT to sacrifice their mental, social, and even physical health for the best shot at the most "prestigious" academic positions.  We should support them in following the path they DO want. 

This goes hand-in-hand with the false problem of the "overproduction" of PhD astronomers.  PhD astronomers have one of the lowest unemployment rates in the country; there are not too many of us for the economy, there are just too many of us for the Academy.  By refusing to denigrate, and in fact celebrating, those who seek to apply their skills to job tracks unlike their PhD advisers', we solve this "problem" and improve the job prospects of astronomers everywhere.

The Prehistory of Exoplanets

A quick entry:  Scott Gaudi and I have been writing a chapter on exoplanet detection methods for quite a while now, and it is finally finished.  Paul Kalas is helping us edit this chapter for "Planets, Stars, and Stellar Systems" (Terry Oswalt is editor-in-chief; Springer is publishing).

We have our version of the chapter (not exactly the book version, but close) available on astro-ph: arXiv:1210.2471v2.  We go through how various detection methods work, give some numbers about their difficulty (“the magnitude of the problem”), and quantify their detection limits as functions of planet parameters using common notation and assumptions (Scott did most of the heavy lifting there).
I find Chapter 1 really useful;  I regularly use it as a reference for the basic equations of orbital motion (fans of Wright & Howard 2009 will recognize the notation and basic plan).  My favorite part of the chapter, though, is the "pre-history" of exoplanet detection, in a subchapter called "Early Milestones in the Detection of Exoplanets."  There were no fewer than 5 exoplanets detected and published prior to 51 Peg b;  can you name them?
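For reference, the basic single-planet radial velocity model those equations build on, in standard notation (this is the textbook form, not a quotation from the chapter):

$$ v_r(t) = K\left[\cos\big(\nu(t)+\omega\big) + e\cos\omega\right] + \gamma, \qquad K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_p \sin i}{(m_\star + m_p)^{2/3}} \frac{1}{\sqrt{1-e^2}}, $$

where ν(t) is the true anomaly, ω the argument of periastron, e the eccentricity, i the inclination, and γ the systemic velocity.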

Science at the Speed of Twits

It’s been an interesting few weeks.

Michael Cushing was here at Penn State giving a talk on the discoveries of Y dwarfs with WISE (not to be confused with Penn State's own Kevin Luhman's discovery of a Y-dwarf-white-dwarf binary.  Say "the WISE Y dwarf, not the white-dwarf Y dwarf" 10 times fast). 

After his talk, I asked him if all of them had measured parallaxes.  Mike said that they weren't done with the analysis, and I clarified that I wanted to know not whether any hadn't had their parallaxes measured yet, but whether any had measured parallaxes consistent with zero (that is, a dispositive null detection of a large parallax).  He said "no," and I told him that if he came across one, to call me.  A 300 K object bright enough to be seen beyond several parsecs must be something else.
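To spell out what such a dispositive null buys you (my gloss, using nothing but the parallax-distance relation): a parallax measured to be consistent with zero at precision σ_ϖ is a lower limit on the distance,

$$ \varpi < n\,\sigma_\varpi \;\Longrightarrow\; d > \frac{1}{n\,\sigma_\varpi}\ \mathrm{pc} \qquad (\text{parallax in arcseconds}), $$

which is exactly the regime where a ~300 K blackbody would be too luminous to be an ordinary Y dwarf.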

I mentioned to Steinn Sigurdsson in the stairwell that WISE had just completed humanity's first sensitive search for Dyson spheres.  He, being in the midst of multiple proposals to the New Frontiers in Astronomy and Cosmology Research Grant Program, due within hours, suggested that I put in a pre-proposal.  I told him I would if he did the paperwork, and we wrote up a one-page pre-proposal and the names of a few potential referees we thought were crazy enough to like the idea but not crazy enough to steal it (note to our referees:  that's intended as a sincere compliment).

We were invited to do a full 10-page writeup plus 4-page executive summary, and we leaned on Matt Povich to help us with the actual numbers and practicality of how such a search would work.  I was as shocked as anyone when we learned a few weeks ago that we had won.  Another winner is my PhD adviser, Geoff Marcy:  the apple doesn't fall far from the tree!  In fact, it was a conversation with Geoff when I was a graduate student that first got me thinking about actually using all-sky infrared surveys to search for Dyson spheres.  That, along with assigning Kardashev's 1964 paper in a seminar I taught, is what primed me for this project.

The John Templeton Foundation has been slowly working up a press release ahead of the formal awards ceremony this Friday and Saturday, and we thought about having Penn State put out a simultaneous release, but were too busy (other institutions were prompter).

Anyway, I presented our research project here at Penn State the Friday before last, in a well-attended internal talk entitled “Keeping Up With the Kardashevs” (blame Steinn for the title).

During my talk Derek Fox and Steinn Sigurdsson tweeted many of the contents, as they are wont to do (@steinly0 and @partialobs, for the other twits amongst you; I'm @Astro_Wright).  In the ensuing twitstorm, a follower of a follower of Derek's who happens to write for The Atlantic picked up on this, and within 3 hours of my talk I had an interview request.

The Atlantic article went live shortly after the Templeton press release on Thursday, coincidentally while I was being hosted by none other than Mike Cushing at the University of Toledo to give the colloquium (small world!  He was also my lab-mate at Boston University when we were undergrads).  Before other institutions could say “press release” I had, despite having no release of my own:

  • The Atlantic article
  • a bunch of retweets and blog posts about the Atlantic article (favorite title:  “Astronomers assume aliens are more open to solar power than Mitt Romney“), 
  • an interview on Canadian Sports Radio  (I was a bit worried about a Sandusky-scandal ambush, but it was all business.  I did, however, understate the distance to galaxies by a factor of 1000.  *sigh*) 
  • a fresh article at The Register (apparently I’m an “astroboffin”)
  • lots of Facebook entries and Tweets by me, Steinn, and Derek
  • another interview request with Newstalk, Ireland’s national talk station
  • and, most importantly, I made the front page of Slashdot.  

So, which do you think drove the most traffic to this blog?  According to Google Analytics, the winners are:

  1. slashdot.org (237 visits; bounce rate 84%)
  2. The Atlantic (76 visits, bounce rate 51%)
  3. Steinn’s blog (66 visits, bounce rate 45%)
  4. the FaceBook (39 visits, bounce rate 76%)
  5. The Register (14 visits, bounce rate 79%)

Note that in these numbers, Twitter links do not get labeled as such, so I can't quantify how much of the above originally came from our tweets.  Also, much of the Slashdot and Atlantic traffic may have originated by way of Steinn's blog and Facebook, so the numbers for those two sources probably underestimate their importance as well.

My bottom line: the whole "new media" thing is important.  The old press release model is not enough.  I recommend Marc Kuchner's "Marketing for Scientists" (I got a synopsis long after starting this blog, but before I started tweeting; Kuchner's analysis is consistent with my experience.  Another small-worldism:  Kuchner is talking about his book both at Toledo and here at PSU soon). 

The other important takeaway: Slashdot, as ever, has a lot of users but a very low signal to noise ratio, both in terms of content, and in terms of the quality of the links it engenders.  Steinn’s blog is apparently just as effective at driving real traffic (note: N=1), so make sure to be nice to him and get on his radar!  :)   

Finally, I get to present our research proposal to the John Templeton Foundation on Saturday.  I think Freeman Dyson may be in the audience; I hope it doesn’t feel like my qual all over again, to present this stuff with him right there.  

Also, I think that I’m going to change my title slide for that talk. :)

Update:  Eric Jensen points out that I didn’t finish my thought about visits and bounce rates. Steinn’s blog generated just as many “quality” visits, meaning people that not only clicked the link but stuck around to visit at least one other page on the blog.  The “bounce rate” is the rate of clicks that do not result in such a visit.  So Steinn’s 66 visits at a 45% bounce rate implies 36 “quality” visits, while slashdot.org’s pathetic 84% bounce rate meant that only 38 of their 237 visits were worthwhile.  Since Steinn undoubtedly generated additional visits through Facebook (and also by linking to the Atlantic and slashdot on Twitter), he was probably the single most important driver of traffic to my blog.
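For concreteness, here is the arithmetic in one throwaway Python snippet (numbers copied from the Google Analytics list above):

```python
# quality visits = visits * (1 - bounce rate), rounded
referrers = [
    ("slashdot.org",  237, 0.84),
    ("The Atlantic",   76, 0.51),
    ("Steinn's blog",  66, 0.45),
    ("Facebook",       39, 0.76),
    ("The Register",   14, 0.79),
]
for name, visits, bounce in referrers:
    print(f"{name:14s} {round(visits * (1 - bounce)):3d} quality visits")
```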


Waste Heat, part III: Climbing Kardashev’s Scale

In my last entry I discussed Kardashev's scale of civilizations and Dyson's insight into a completely general method of detecting distant alien civilizations.  I gave a talk on all of this last Friday, and before the talk Eric Feigelson mentioned to me that Nikolai Kardashev is alive and active:  in fact, at age 80, he is currently the deputy director of the Space Research Institute of the Russian Academy of Sciences.

[Image: Carl Sagan]

 Freeman Dyson, too, is an active scientist at 88 at the Institute for Advanced Study at Princeton.  I heard Dr. Dyson speak when I was a graduate student at Berkeley (on the “garbage bag” theory of the origin of the cell, if I recall) and I may get to meet him in person next week in Philadelphia.

The next scientist in my tale left us far too early.  Carl Sagan took Kardashev’s scale and extended it to include fractional numbers by noting that the fraction of all sunlight that strikes the Earth is (very) roughly the inverse of the number of stars in the Galaxy.  That is, 

$$ \frac{\pi R_\oplus^2}{4\pi\,(1\ \mathrm{AU})^2} = \left(\frac{R_\oplus}{2\ \mathrm{AU}}\right)^2 \approx 4\times10^{-10} \sim \frac{1}{N_\star}, $$

where N⋆ is the number of stars in the Galaxy.
This allowed him to redefine Kardashev’s scale slightly by defining:
$$ K = \frac{\log_{10} P - 6}{10}, \qquad P\ \text{in watts.} $$
On this scale, a civilization with energy supply of 10 billion MegaWatts (which is roughly 5% of the total incident sunlight on the Earth) would have K=1.  Humanity’s current energy supply is about 10 million MegaWatts, or K=0.7, and if we collected and used all of the incident energy on the Earth we would have K=1.12.   
A civilization that collected and used almost all of the radiant energy of a star like the Sun (4 × 10²⁰ MW) would have K=2.06, and one with 100 billion solar luminosities of energy supply would have K=3.16, roughly consistent with Kardashev's scale.
It is interesting to note that humanity’s energy supply has doubled in the last 30 years.  At this exponentially increasing pace, we will achieve K=1 in 300 years, and have an energy supply equal to the incident sunlight on Earth in 400 years.  At this point, we will have doubled the Earth’s pre-industrial mid-infrared waste heat signature.  In fact, this will be a new form of global warming that has nothing to do with greenhouse gasses:  just by using energy for our own needs we will significantly warm the planet with the waste heat from our computers and electric cars and phones.  
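A quick sanity check on all of these numbers (a throwaway sketch; P is the energy supply in watts):

```python
from math import log10

def sagan_K(P_watts):
    """Sagan's interpolation of the Kardashev scale."""
    return (log10(P_watts) - 6) / 10

print(sagan_K(1e13))           # humanity today (~10^7 MW)     -> 0.70
print(sagan_K(1e16))           # 10^10 MW                      -> 1.00
print(sagan_K(1.7e17))         # all sunlight striking Earth   -> 1.12
print(sagan_K(3.8e26))         # the Sun's entire output       -> 2.06
print(sagan_K(1e11 * 3.8e26))  # 100 billion Suns              -> 3.16

# Doubling the energy supply every 30 years, time to climb the scale:
def years_to(K_target, K_now=0.7, doubling_time=30.0):
    doublings = (K_target - K_now) * 10 / log10(2)
    return doublings * doubling_time

print(years_to(1.0))    # ~300 years
print(years_to(1.12))   # ~420 years (the "400 years" above)
```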
This gives a sense of how quickly we are approaching our Malthusian limits on energy:  unless we start colonizing space, we will hit hard energy limits in just a few human lifetimes.  As I wrote before, every other Malthusian limit, from food supply to fresh water to materials, we can in principle overcome with technology and energy expenditure, but energy itself will eventually limit us.  Unless we want to seriously heat the Earth, we're going to have to move to space to find the energy and waste-heat emission surfaces to keep expanding our economic activity.  Given how poorly coordinated most of our energy use is (even when we know it's heating the planet, we continue to burn more and more fossil fuel), it seems very typical of our species that some fraction of our population will always be looking for new sources of energy.  Viewed this way, the question is not whether we will ever be able to build a Dyson sphere;  the question is why it isn't inevitable!

[Image: "Trantor from Space"]

I like to generalize the Kardashev scale the way that Zubrin did in 2000 (in his book Entering Space, I think), not by the actual energy use, but by the extent of a civilization.  A planet-wide species (like ours) would be a “K1”, or civilization of the first Kardashev type.  A solar-system-wide civilization would be a “K2”, and this would include everything from a thriving moon colony to a full-on Dyson sphere.  A “K3” civilization would have colonies between the stars.
I should note that the transitions between these types will actually be very quick, cosmologically speaking.  Consider a space-faring civilization that can colonize nearby stars in ships that travel at “only” 0.1% the speed of light (our fastest spacecraft travel at about 1/10 this speed).  Even if they stop for 1,000 years at each star before launching ships to colonize the next nearest stars, they will still spread to the entire galaxy in 100 million years, which is 1/100 of the age of the Milky Way.  In other words, if you watched a galaxy in time-lapse from its formation to the present day in a movie 1 minute long, it would go from having a single K2 civilization to having a K3 civilization in under 1 second.  
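The arithmetic behind that claim, sketched out (the 5-light-year hop between stars is my assumed ballpark; the other numbers are from the text):

```python
# A colonization wave: each hop is a travel leg plus a 1,000-year pause.
ship_speed = 0.001        # fraction of c, i.e., light-years per year
hop = 5.0                 # assumed typical distance to the next star [ly]
dwell = 1000.0            # pause at each star before launching again [yr]

years_per_hop = hop / ship_speed + dwell     # 5,000 + 1,000 years
wave_speed = hop / years_per_hop             # effective speed [ly/yr]

galaxy = 100_000.0                           # galactic diameter [ly]
print(f"{galaxy / wave_speed / 1e6:.0f} million years to span the Galaxy")
# -> ~120 million years, i.e., the "100 million years" quoted above
```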
So if we scan the heavens for galaxies with aliens, we should not expect to find many that have only a few aliens:  they should have either no K2’s, or a K3.  Interestingly, we can apply the same logic to the Milky Way:  if there are aliens in the Milky Way, it is very unlikely that we would have come of age in an era where they were in transition between K2 and K3.  Either the Galaxy is filled with spacefaring aliens, or we are the first.  So we should either expect the galaxy to be filled with Dyson spheres, or totally empty of spacefaring life.  This makes a null detection of Dyson spheres very interesting! (This is also a rephrasing of the so-called Fermi paradox).
OK, practical numbers for Dyson spheres will have to wait for another entry.  Next time, I’ll look at the problem of extinction and perhaps why aliens might not have any waste heat to detect.
[Image credits: Carl Sagan from Wikipedia, “Trantor from Space” by Slawek Wojtowicz]