Best*. Photometry. Ever. V: “Explosive Debonding” and, What is the Best Photometry Ever?

Last time, I showed the great results Ming got at Palomar using the holographic diffuser – nice, stable stellar images in marginal seeing and with a sub-optimal diffuser.  But that was way back in March – where’s the progress?

Well, we had hoped to post the results of the new diffuser, but we’ve had two problems since then.  One is weather — Ming seems to have the worst weather luck at Palomar, and so we haven’t had any successful observations since Fall.  The second is explosive debonding.  More specifically, this email Ming forwarded to me from Palomar:

I regret to advise you [never a good start — JW] that our P200 NIR camera WIRC has suffered a catastrophic failure of its Hawaii-2 IR array…

The specific failure of the WIRC array was explosive debonding & separation of the semiconductor from its substrate.  This is apparently a well-known failure mechanism of the H2 array family, and we believe it is most likely due to thermal cycling over the ~ 10-year lifetime of the camera.  We are evaluating our operational procedures to mitigate the likelihood of this particular failure in a repaired WIRC (e.g. minimize thermal cycling as we do with TSpec) going forward.  

Or, as I put it on the social media:

[Facebook post screenshot]

which got these responses:

[Screenshot of the responses]

Yes, it’s nice to be working with a well-staffed and -funded observatory like Palomar!  It’s still not clear if we’re going to get an engineering-grade old H2 array or a newer RG array.  If they install the old array, Ming will have to re-characterize the non-classical nonlinearity in the array to achieve this sort of precision again.  Eventually, we hope to be able to move over to the RG detector, which would mean that the photometry WIRC can achieve will be even better than we had promised (but also that all that work on the nonlinearities won’t be needed any more — good problems to have, I guess).


So, back to the original point of this thread: 110 ppm NIR photometry from the ground.  Is this the best photometry ever?  

There are lots of ways to measure the quality of photometry.

1) One way to think of it is in terms of relative error in the flux received: basically, if the true flux is constant at F, how much does your measurement of F vary from observation to observation?  If your sources of noise aren’t too red, then things will improve with the number of photons you receive (ideally as sqrt(N)), so the more photons you have, the better your precision.  This means that if you observe a bright star on a big telescope for a long time, you have an advantage.  This is the regime where we are: a Ks=10 star on a 5-meter observed for 3.5 hours.  The scatter in our 30-minute bins is 110 ppm — and by this metric I think 110 ppm is the best NIR ground-based precision ever.  This is comparable to Bryce Croll’s WASP-12b (Ks=10.2) JHKs-band photometry, which had very high precision as well: their ~150-point bins (corresponding to ~35 min) reached ~170 ppm with CFHT, a 3.6m telescope.  Jacob Bean’s group has achieved ~150 ppm in K band (in one of their spectral channels) using the 6.5m Magellan multi-object spectrograph on a Ks=10.5 star, although it’s unclear what their 30-min scatter was.
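(For concreteness, here’s a minimal Python sketch of that binned-scatter statistic.  The flux array and 15-second cadence below are invented stand-ins for illustration, not our actual WIRC data.)

```python
import numpy as np

# Invented example light curve: 3.5 hours at a 15 s cadence with
# 600 ppm per-point white noise. NOT the real WIRC data.
rng = np.random.default_rng(0)
cadence_s = 15.0
flux = 1.0 + 600e-6 * rng.standard_normal(int(3.5 * 3600 / cadence_s))

def binned_scatter_ppm(flux, cadence_s, bin_minutes=30.0):
    """RMS scatter of the bin means, in parts per million."""
    per_bin = int(bin_minutes * 60.0 / cadence_s)
    n_bins = len(flux) // per_bin
    means = flux[:n_bins * per_bin].reshape(n_bins, per_bin).mean(axis=1)
    return 1e6 * means.std(ddof=1) / means.mean()

# With white noise this should land near 600 / sqrt(120) ≈ 55 ppm
print(binned_scatter_ppm(flux, cadence_s))
```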

The best optical measurement I’m aware of by this metric is Daniel Potter’s achievement of 30 ppm on HD 209458 in 20-minute observations with PEPPER (how do I remember this obscure AAS abstract?  I gave my dissertation talk in the same session!).  Knicole Colón’s 62 ppm on HD 80606 from the GTC is the next best I know of.

2) Another way to do it is to measure the quality of the technique in a more normalized way: if someone can achieve 120 ppm in 15 minutes, then their technique is better than what we have achieved, even though 120 ppm is more than 110 ppm, because if they went for the full 30 minutes they’d certainly get below 110 ppm.  This is a “fairer” way to judge things.

By this metric, one of the best ground-based observations ever (but in the optical) is John Johnson’s photometry of HD 209458 in 2006 using the orthogonal parallel transfer imaging camera (OPTIC) on the UH 2.2m.  The idea here is very similar to holographic diffusion, but at the detector instead of within the optical path: you shift the charge in the CCD around to smear out the starlight into a shape of your choosing (John chose a square).  It worked really well: John got 470 ppm in 1.3 minutes.  If you bin that down assuming Gaussian noise, that corresponds to roughly 100 ppm in 30 min.
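That sqrt(time) scaling is easy to sanity-check; a quick sketch, assuming purely white noise:

```python
import math

def scale_precision_ppm(sigma_ppm, t_minutes, t_target_minutes=30.0):
    """Scale a quoted scatter to a common bin length, assuming white noise."""
    return sigma_ppm * math.sqrt(t_minutes / t_target_minutes)

print(scale_precision_ppm(470, 1.3))   # OPTIC: ~98 ppm at 30 minutes
print(scale_precision_ppm(120, 15.0))  # the hypothetical case above: ~85 ppm
```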

Of course, this metric still incorporates the stellar brightness and telescope into the problem.  

3) The true test of a technique would compare it to the photon limit.  As a fraction of the photon limit, I think the best you can do is count photons with very low background.  High-energy / X-ray astronomy might approach this limit in some sense, because when you only have a few detections, the photon noise from the source dominates.  Plus, you’re doing absolute photometry.  Of course, the challenge isn’t to find a source with 3 counts per hour and negligible background; it’s to come similarly close to the photon limit in the limit of LOTS of photons.  I’m not sure what the record is here — I imagine a curve of how far above the limit you are as a function of the number of source photons you detected.

The record by this metric is probably something like Brown and Charbonneau’s HD 209458 observations with the Hubble Space Telescope, where they got 110 ppm in 80 seconds (so, about 4 times better than John’s observations of the same star with a similarly sized telescope and similar exposure times).  The photon limit for these observations was 80 ppm, so they were only 38% above the photon limit!
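In code, that comparison looks like this.  Note that the photon count below is inferred from the quoted 80 ppm limit (N = 1/limit²), not taken from their paper:

```python
import math

def excess_over_photon_limit(sigma_ppm, n_photons):
    """Measured scatter divided by the photon limit, 1/sqrt(N)."""
    limit_ppm = 1e6 / math.sqrt(n_photons)
    return sigma_ppm / limit_ppm, limit_ppm

# An 80 ppm photon limit implies N = (1e6/80)**2 ≈ 1.6e8 detected photons
# per exposure; that N is inferred from the quoted limit, not measured.
ratio, limit_ppm = excess_over_photon_limit(110, (1e6 / 80) ** 2)
print(ratio, limit_ppm)   # 1.375, 80.0 -> about 38% above the limit
```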

But their telescope was in space.  That’s cheating.


OK, that’s our story (so far!).  

David Hogg has some nifty ideas about truly optimal photometry, and we’ve swapped data and “secret sauce” with his group to see if there are algorithmic improvements to be had over aperture photometry.  We’re still about 2x the photon noise limit, so there is room to run, no matter how you measure our photometry.

I invite readers to submit their nominations for the Best. Photometry. Ever., along with which of the 3 metrics I’ve used it is “best” in (or suggest a new one!).

2 thoughts on “Best*. Photometry. Ever. V: “Explosive Debonding” and, What is the Best Photometry Ever?”

  1. Aleks Scholz

    If the goal is a ‘fair’ comparison, the third metric is clearly preferable, because it allows for a comparison across telescopes. In addition, one has to scale to the same altitude and airmass, to account for scintillation differences. The other factor that is missing is the time scale – systematics become much harder to correct in monitoring runs over days or weeks rather than hours (which are most of your examples). I could go on. All this makes it difficult to compare. With our 0.94m James Gregory Telescope in St Andrews we can get 1mmag in about 2 minutes for 10mag stars over several hours. That’s at sea level. Scaled to a more useful altitude, 8m aperture, and 30 minutes, that would be, well, damn close to a world record.
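For the record, here is the photon-noise-only version of that scaling (a sketch; scintillation scales differently with aperture and altitude, so the true scaled number would be somewhat higher):

```python
import math

# Photon-noise-only scaling of the quoted 1 mmag / 2 min / 0.94 m figure
# to an 8 m aperture and 30-minute bins. Scintillation does NOT scale this
# way, so treat the result as the optimistic, photon-limited version.
sigma_ppm = (10 ** (0.001 / 2.5) - 1) * 1e6   # 1 mmag ≈ 921 ppm in flux
sigma_ppm *= math.sqrt(2.0 / 30.0)            # bin 2 min up to 30 min: ~238 ppm
sigma_ppm /= 8.0 / 0.94                       # N ∝ D², so σ ∝ 1/D: ~28 ppm
print(sigma_ppm)
```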
