Monthly Archives: April 2014

Best*. Photometry. Ever. III: Prototyping Holographic Diffusers

In the first installment of this series, Penn State Research Associate Ming Zhao presented “the problem” of precise NIR photometry.

In the previous installment, we made a detour into the not-so-distant past and revealed how fate had intervened to give Suvrath Mahadevan the idea to install holographic diffusers on a spectrograph that needed to be blurrier.  Suvrath recommended we try the same thing for photometry.

In this installment, Ming and I describe the efforts of many people to prove the diffuser could actually do what we needed.

With Suvrath consulting, we immediately looked into this possibility: we searched for places that make diffusers and contacted all of them. Unfortunately, none of them made diffusers that fit our specifications, but luckily, some of them did have development capability.  After a few months of searching and back-and-forth communications, we were referred to a vendor that sent us some sample diffusers so we could further investigate the feasibility of Suvrath’s idea for photometry.

We tested the samples on sky with Davey Lab’s rooftop telescope with the kind help of Prof. David Burrows and Lea Hagen, a dedicated and hardworking graduate student who was in charge of the telescope back then.  We inserted the diffusers into the optical path and imaged some stars, but the results were disappointing — we did not get the nicely diffused images we had hoped for.

We tested the samples’ performance in Suvrath’s lab with the help of Sam Halverson, a bright graduate student who works on instrumentation in Suvrath’s group.  Those tests were critical, as they allowed us to learn that the diffuser will only work when the beam of starlight is much larger than the diffusing sub-structures in the diffuser (i.e., collimated beams that fill the diffuser are best).


Test images of laser light through the prototype diffuser (left) and without the diffuser (right) taken in Suvrath Mahadevan’s lab at Penn State.  The diffuser successfully spreads the light out over many pixels, just like we want.

We started thinking about diffusers for the MINERVA array, but found it would be hard there because the filter wheels are not in a collimated beam, and the result would be a lot like our rooftop tests.  But the lab tests showed us this would actually work great (in principle) on the Wide-field Infrared Camera (WIRC) at Palomar (where Ming does his secondary eclipse work), since the beam there is collimated at the filter wheel, then re-imaged onto the detector.  So Ming discussed this with his colleague Prof. Heather Knutson at Caltech.  She agreed that we should put a diffuser at the Palomar 200-in, and none of what happened next would have happened without a big push from her. 

We still had to convince Caltech to put one of these things into WIRC, and we still weren’t sure that it would actually work on sky.  Fortunately, there was a Palomar Science Meeting coming up (thanks for advocating for me to go, Jason — MZ).  Heather managed to get Ming on the schedule to give a long talk about the great secondary eclipse work they had been doing together.  Ming ended his talk with the results of our diffuser tests and argued that Palomar would be much more efficient at this work with a diffuser installed (no more praying for perfectly bad seeing!).  Heather later followed up with Palomar administrators and convinced them that this was the right way to proceed.

Then, over the next several months, Heather led the effort: she communicated with the vendor, devoted substantial funds to the development and purchase of the diffuser, and coordinated with Caltech’s staff on the implementation. We worked closely with her to ensure the quality and specs of the newly developed diffuser met our requirements, since the opening angle we needed was at the manufacturing limit.  She also coordinated with Eugene Serabyn’s team at JPL to test the final product at their Palomar testbed (more on this next time). Finally, by the end of last year, a diffuser was delivered and installed in one of WIRC’s filter wheels, and everything was ready for on-sky tests, thanks to all the teamwork and efforts of the Palomar staff and everyone involved.

But of course, it couldn’t be that easy…  Next time:  first results on-sky

Best*. Photometry. Ever. II: Fate Intervenes

In part 1 of this series, Ming Zhao outlined the problem:  how to get a nice, uniform spread of light across lots of pixels when trying to do sensitive infrared photometry?  We discussed lots of options, but it was a conversation with another assistant professor here at Penn State, Suvrath Mahadevan, that showed us the way.  The story starts way back in Suvrath’s grad student years…
[cue wavy flashback effects…]
Vintage photo of Suvrath, then a young graduate student in snowy Florida
Suvrath was at the University of Florida, working with his thesis adviser Jian Ge (both of them formerly of Penn State, actually) on the Kitt Peak Exoplanet Tracker.  The Exoplanet Tracker was an externally dispersed interferometer (I worked on one, too — TEDI, at Palomar, with James Lloyd, Dave Erskine, and Jerry Edelstein, who was also my wife’s thesis adviser… wait, I’m getting off track).
Anyway, ET was a planet-finding instrument that measured precise radial velocities by passing starlight first through an interferometer and then through a low- or medium-resolution spectrograph.  The problem was that while the interferometer was great for precise velocities, the fringes it produced in the spectra were terrible for instrument calibration.  Normally, one sends spectrally featureless light through a spectrograph to get a “flat field” and measure the instrumental response.  They had built a much more stable version of the interferometer than the piezo-driven one, but the new version couldn’t be jiggled around to wash out the fringes (sound familiar?).  Without flat-fielding, the project wouldn’t work, and they couldn’t take flat fields.
What’s worse, they wanted to do wavelength calibration with an arc lamp to take out the effects of a tilted slit, and the fringing of the interferometer made this very hard as well.  The fringes were the whole secret sauce that made the ET and MARVELS projects work at all, and yet they were confounding all of the normal ways one calibrates instruments.


Suvrath had been working on the problem to no avail, and decided to put it down for a while and switch gears.  As he turned and got up, a copy of the Edmund Optics catalog fell to the ground and opened up to a random page.  Survath picked it up and…
light poured down from the heavens, choirs sang, a gentle breeze tousled Suvrath’s flowing locks as his eyes widened and he saw the very answer he sought on the open page…
A holographic diffuser.
A holographic diffuser is an optic that scrambles the directions of the light that passes through it.  Sort of like frosted glass, it makes everything behind it (very, very) fuzzy, except that it doesn’t block or reflect any light the way frosted glass does, so you don’t lose any light.
If you put a holographic diffuser in the light path of a camera or spectrograph, then light that normally would come to a nice sharp focus will get diffused out into a Gaussian, or tophat, or some other shape that the designers choose.  Up close they look like clear wafers with little worm shapes etched into them — I think that the direction light gets redirected is basically a function of what part of the diffuser it hits.
So, Suvrath and Scott Fleming, another student of Ge’s, installed a holographic diffuser in ET (and, later, MARVELS) to make “milky flats” — these diffusers washed out the fringes caused by the interferometer and allowed them to calibrate their instrument.  They also allowed for “non-fringing” arc exposures to be taken through (and in spite of) the interferometer.
Problem solved.  Project saved.  Magic book of fate to the rescue.
Or, I should write, problemS solved, because years later, in a conversation about Ming Zhao’s project, Suvrath would suggest these diffusers to solve the problem of the telescope that wouldn’t defocus enough and the seeing that was never quite bad enough.  If we installed a holographic diffuser at Palomar, it might blur out the stars just the way we needed, and in a very stable, predictable way.
Maybe.  These things have a way of being much more complicated than you think they should be.  Stay tuned for the next installment.

Best*. Photometry. Ever. I: The Problem

I was having coffee with David Hogg, and he asked, essentially, “What’s so hard about photometry?  Why can’t we do Kepler from the ground?”  I gave the usual slew of answers, but he wasn’t sure this wasn’t fundamentally a (solvable!) data analysis problem.  I told him about Ming Zhao’s efforts at Palomar to get outstanding photometry for secondary eclipse work on hot Jupiters, and that I thought we might have achieved the highest precision ground-based O/IR photometry ever: about 3 times the photon limit on a bright star with a 5-meter.
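For the curious, “the photon limit” here means the Poisson (shot-noise) floor: if you detect N photons, the best possible relative precision is 1/√N.  Here’s a tiny sketch of that arithmetic (the photon count is a made-up illustrative number, not our actual Palomar numbers):

```python
import math

def photon_limit_ppm(n_photons: float) -> float:
    """Poisson (shot-noise) limit on relative photometric precision, in ppm."""
    return 1e6 / math.sqrt(n_photons)

# Hypothetical example: 1e10 detected photons in one time bin
# gives a 10 ppm shot-noise floor; "3x the photon limit" would be 30 ppm.
limit = photon_limit_ppm(1e10)
print(f"photon limit: {limit:.1f} ppm")      # 10.0 ppm
print(f"3x the limit: {3 * limit:.1f} ppm")  # 30.0 ppm
```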

Since then, our two groups have been exchanging tidbits and ideas and data, trying to produce the best possible ground-based photometry.

In this first installment of a series, Penn State Research Associate (and NASA Origins of Solar Systems award recipient) Ming Zhao guest blogs “The Problem”:

How to get ultra-high precision differential photometry?

The key is to understand the systematics of your measurement and calibrate them well, and/or keep your instrument as stable as you can so that the instrumental systematics don’t affect your differential measurements. The latter was essentially what the Kepler spacecraft was doing until its reaction wheels failed. Before that, Kepler had achieved <10 parts-per-million (ppm) precision on bright stars and had detected thousands of small planets by keeping its pointing extremely stable over time. 
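To make the “differential” part concrete, here’s a toy numpy sketch (all numbers invented for illustration): a common-mode systematic like transparency drift multiplies the target and a comparison star alike, so dividing the two light curves removes it and leaves only the uncorrelated noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A 2% common-mode systematic (e.g. transparency drift) hits both stars.
systematic = 1.0 + 0.02 * np.sin(np.linspace(0, 4 * np.pi, n))
target = 1.0 * systematic * (1 + 1e-4 * rng.standard_normal(n))
comparison = 2.5 * systematic * (1 + 1e-4 * rng.standard_normal(n))

raw_scatter = np.std(target) / np.mean(target)     # dominated by the drift
diff = target / comparison                         # differential light curve
diff_scatter = np.std(diff) / np.mean(diff)        # near the noise floor

print(f"raw:          {1e6 * raw_scatter:.0f} ppm")
print(f"differential: {1e6 * diff_scatter:.0f} ppm")
```

The division only helps with systematics that are truly common to both stars; anything that affects the target and comparison differently (like the PSF effects discussed below) survives it, which is the whole problem.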

Similarly, by calibrating the instrumental effects in addition to highly stable pointing, astronomers pushed the limits of both the Spitzer space telescope and the Hubble space telescope to allow high precision measurements of the dauntingly tiny signatures from exoplanetary atmospheres. [For more details on Ming’s secondary eclipse work, how it works, and how it uses precise photometry, look at these links— JTW]

Because of the Earth’s gravity and atmosphere, it is a lot harder to do that from the ground. Gravity makes it difficult to keep the pointing of a gigantic telescope extremely stable, and induces flexure in the optics that causes astigmatism. The atmosphere makes the point-spread function highly variable with time. These effects are usually very tiny and do not affect most astronomers, but they are disastrous for high precision measurements of exoplanetary signals.

To address these issues, one common approach astronomers take for ground-based observations is to defocus the telescope so that the PSF is spread out over many pixels. This is key to mitigating the difficult-to-calibrate inter-pixel variations of a detector, as it makes the instrumental systematics more Gaussian-like. It also has the advantage of significantly improving observing efficiency, since it takes longer to saturate the detector with a defocused image.
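A quick toy simulation shows why spreading the PSF helps (the 1% flat-field error and the Gaussian PSF here are illustrative assumptions, not WIRC’s actual properties): a sharp PSF samples only a handful of pixels, so its measured flux jumps around as pointing jitter moves it across pixel-to-pixel gain errors; a broad PSF averages over hundreds of pixels and the jitter-induced scatter shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
size = 64
gains = 1 + 0.01 * rng.standard_normal((size, size))  # 1% flat-field errors

def measured_flux(x0, y0, fwhm):
    """Total measured flux of a unit Gaussian PSF centered at (x0, y0)."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()
    return (psf * gains).sum()

# Jitter the star's position by up to +/-1 pixel and measure the flux scatter.
offsets = rng.uniform(-1, 1, size=(200, 2))
scatters = {}
for fwhm in (2.0, 15.0):
    fluxes = [measured_flux(32 + dx, 32 + dy, fwhm) for dx, dy in offsets]
    scatters[fwhm] = 1e6 * np.std(fluxes) / np.mean(fluxes)
    print(f"FWHM {fwhm:4.1f} px: scatter = {scatters[fwhm]:.0f} ppm")
```

The sharp (2-pixel) PSF shows far larger jitter-induced scatter than the broad (15-pixel) one, which is the basic argument for defocusing — and, eventually, for a diffuser that produces a stable broad PSF without defocusing at all.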

Like other groups that were carrying out this type of study, we were also facing these issues when we started using Caltech’s Palomar 200-in Hale telescope to measure thermal emission from exoplanets. After improving the guiding stability of the telescope, we were able to get precision of better than 200 ppm using the defocusing approach under the best conditions. However, due to astigmatism, the defocused images always have bright spots that cause highly time-correlated systematics and also damage the observing efficiency.


As a result, our best precisions could only be reached sporadically, when the atmospheric seeing was consistently bad for a period of several hours. This basically means that our observations were uncontrollable in some sense and relied almost completely on luck — we crossed our fingers and hoped for the worst, most stable seeing every time (quite the opposite of other astronomers!).

This was unsettling. So Jason and I brainstormed a few times to find ways to address this problem. We thought of dispersing the light, creating artificial dome seeing, better calibrating the optics, and Jason even thought of shaking the camera in particular patterns [It turns out engineers really really really don’t like it when you suggest deliberately shaking instruments — JTW]. But none of these approaches was simple or practical to implement, since we didn’t have the flexibility to modify an existing instrument. Jason then discussed this with our local instrumentation expert, Prof. Suvrath Mahadevan. Suvrath inspired us with a brilliant idea…

More in the next installment.