Graduate student and astronomy writer


UBMS: Week 1

This blog post is a reflection on my first week as a mentor in the UBMS program, the second post in a series that will span the next few weeks. You can read my first blog post in this series that describes the UBMS program and my role within it here.

Last Tuesday was the first day of our program. We had a shortened class period with the students, since they had other introductory tasks to complete for the larger UBMS program. We have two students in our astronomy program, a sophomore and a senior, both of whom seemed very excited to be learning about exoplanets. Student A (the senior) has been in the UBMS program twice before this, and worked on projects about cosmic rays and atmospheric emission. My co-mentor and I think that Student A’s background will serve her well this summer.

Student B (the sophomore) is a first-timer in UBMS, but has a great excitement for exoplanets. Student B seems to have done a lot of background reading about exoplanets, and came in with good prior knowledge about the specifics of the research project, though lacking some fundamental background in astronomy and physics. Making sure that the two students stay on the same page all summer will be a challenge, given their different educational levels and previous experience.

One learning activity that I am very excited to introduce to the students is the idea of concept mapping. When you create a concept map, you identify the key topics, ideas, or skills that you have learned (nodes) and specify the connections between each of them. When done right, a concept map is an excellent tool for synthesizing information, gaining a view of the scope of a course, and identifying how seemingly disparate topics fit together. This is something I have never seen done in a physics or astronomy course, where topics are often presented piecemeal and the “larger picture” is saved for the end as a “ta da!” moment.

At the end of each of our lessons, Michael and I are having the students create a small concept map for the concepts and skills they learned that day. That is the smaller scale review. Between lessons, Michael and I add their smaller concept maps to the larger, master concept map that we are iteratively building throughout the entire course. At the start of each following lesson, we present to them how what they learned last week has been integrated into a larger concept map. They get a visual representation of how their knowledge base is growing, and of how everything that they’ve learned is connected together towards a larger goal. Not only that, but they get the assurance that everything they will learn is relevant to the course goal, and not just side tangents that interest us.

We started on Tuesday with introductions for myself and Michael, as well as our two “faculty” (read: postdoc) mentors. We introduced the topic of the course, general expectations, and the like. We introduced the research question and had the students brainstorm what they would need to know to answer this. Having them come up with the course overview reflected the same process that Michael and I went through designing the course, and was a way for them to break down the big task in front of them into manageable pieces. The students appeared to value this exercise, as it gave them a picture of what the next few weeks would be like, and seemed to lessen the intimidation factor learning something new can bring.

When we mentioned to the students that we were having them take their own transit observations, they seemed excited to actually get to go observing (I think their enthusiasm will wane after experiencing the tedium involved). Whenever the weather cooperates, we’ll go observe a transit, and spend the wait time using the other telescope on the roof to look at pretty things like galaxies, nebulae, and star clusters.

We started the course for real last Thursday, with an overview of basic astronomy. We worked our way out from the Solar System, to stars, to galaxies, and finally the Universe. We introduced the transit method, and did a brief overview of telescopes and CCDs. It was a lot of information for one day, but we assured them that we would go over the more important bits in more detail later. The hands-on portion of the lesson was training on the 24″ telescope on the Davey rooftop. Our first potential observing night was only two days away, so they needed to be trained. No hiccups there, and while they really enjoyed getting to work with the telescope themselves (Michael and I kept hands-off), they got their first taste of the tedium of waiting for long exposures. Observing will be an interesting experience.

The first attempt at concept mapping at the end of Thursday’s lesson went alright, though they seemed a little hesitant at first. We pre-selected a few nodes they might want to include, and they added many of their own as well. The connections they made needed a little modification, but overall they did very well at synthesizing the large amount of information we gave them that day. Their final concept map for the day was more extensive than we had expected, and I’m very interested to see what they come up with in subsequent lessons.

This blog was supposed to be posted last Friday or Saturday. I realize that being late on your first in a series of blog posts is not a good thing, but it actually highlights one of the lessons I learned last week: always have your lesson plans done before the class starts! My co-mentor, Michael, and I had completed about 80% of the lesson plans, course material, and project instructions beforehand, but that last 20% is very difficult to complete while you are simultaneously getting ready for class. We have mostly caught up by now, and I have definitely learned my lesson.

Also regarding lesson plans, I have learned that, just like in battle, no plan survives first contact. I am generally a very meticulous and organized person, and my lesson plans are no different. Michael and I created a course outline and a breakdown of the content and skills needed to answer the research question, and from there we created plans for each individual session. The lesson plans are very detailed, since one of us usually writes them and yet both of us have to teach from them, and include some lecture, lots of active learning methods, hands-on demos, and actual work with the students’ data. Given that the students are not only working with unfamiliar ideas but also extremely exhausted from their intensive program, it’s very hard to predict how long each activity will take. I will have to be very flexible in the coming weeks to ensure that the students learn what they need and get the work done.

Overall, the first week was a bit hectic, but very awesome. Being able to work with interested students in a small group setting is much different than teaching a class of even 20 people, and allows for a more informal and personal approach. I’m excited to see what Week 2 will bring!

Until next week, dear readers.


Mentoring in the UBMS Program: Overview

This summer I decided to take on a new challenge, something that I hope will provide good experience in teaching and mentoring and help me decide if I want to take that sort of path after I defend my dissertation. The Upward Bound Math and Science Program at Penn State has been operating since 2001 and “is designed to strengthen the math and science skills of low-income, first-generation potential college students” (UBMS Website). During the summer session while the undergrads are away, UBMS hosts a Summer STEM Institute on the University Park campus to give students from underprivileged areas of Pennsylvania extra classes in STEM areas, plus classes in useful things like presentation skills, how to write a research paper, how to keep a lab notebook, etc. The students live and work on campus for around six weeks.

As part of the Summer STEM institute, the visiting high school students also get to choose a research project to participate in, complete, and present at the end of the Institute. My role in UBMS this summer is to lead one of the research “labs” with my co-mentor and fellow astronomy grad student, Michael Rodruck. This involved selecting a research question to answer, designing a project aimed at answering it, preparing and executing lesson plans to teach the students background material and guide them through the project, and being an overall mentor to the students in this real-life research experience. Our lab section is a part of the Summer STEM Institute’s Summer Experience in the Eberly College of Science (SEECoS) program.

Michael and I have been training and preparing since mid-March to lead this lab section, which only meets twice a week. The research question we devised (under the supervision of our Astro department supervisors, Drs. Kate Grier and Jon Trump, and the UBMS leaders) is “How do we detect extra-solar planets using the transit method, and what can we learn about those planets from this method?” We chose the transit method specifically over other detection methods because 1) we can use the rooftop telescopes on Davey Laboratory to measure the transits of exoplanets hands-on (whereas we cannot use those telescopes to make radial velocity, microlensing, or direct imaging measurements) and 2) we can use the publicly available Kepler light curves to get the students working with all types of planets.

I have also found in my public speaking experience that the transit method is the most widely known exoplanet detection method and is the most straightforward of the methods for novices to grasp. Given the extremely limited timespan in which we get to work with our students (only 10 lessons!), the transit method seemed like the way to go.
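As a preview of the arithmetic at the heart of the project, here's the simplest thing a transit light curve gives you. This is a minimal sketch with made-up numbers (not from our course materials), just to show why the method is so approachable for novices:

```python
# A transit's fractional depth equals the planet/star area ratio, so the
# planet-to-star radius ratio is simply the square root of the depth.
depth = 0.010                    # a 1% dip in the star's brightness
radius_ratio = depth ** 0.5      # R_planet / R_star = 0.1

# For a Sun-like star (~109 Earth radii), that corresponds to a planet
# of roughly 10.9 Earth radii -- about the size of Jupiter.
planet_radius_earths = radius_ratio * 109.0
```

One measured number, one square root, and you have the size of a planet around another star; that directness is a big part of why the transit method works so well for a short course.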

This teaching/mentoring experience will have a lot of firsts for me: my first time teaching high schoolers, my first time teaching a course on exoplanets, my first time designing the content for an entire course, my first time applying active learning techniques in the classroom. I decided that a useful exercise for me would be to chart this experience through a series of blog posts. At the end of each week I will write a blog post reflecting on the experiences of the week: things that went well, things that didn’t, things I learned, and things I would do differently next time. I will not be using my students’ real names, for privacy reasons.

This will also be the first time I’m writing a blog like this: a live, self-reflection on my teaching experiences. I hope you’ll all stick with me as I stumble my way through the next 5 weeks. It will be an interesting experience, to say the least.


ERES Day 2 Session 4: Statistical Characterization

Our post-lunch session is chaired by recent PSU PhD, Dr. Benjamin Nelson. These talks are all related to characterization of exoplanet systems using statistical methods.

Systematics-insensitive periodic signal search with K2 (Ruth Angus, University of Oxford)

When the second reaction wheel of Kepler failed, it was recommissioned as the K2 mission. The issue with the K2 mission is that the telescope itself drifts slowly, and they need to fire the thrusters every so often to correct it. That means that, as the stars appear to drift across the CCD, the precision of K2 is decreased compared to its predecessor.

To compensate for this, they need to come up with a better analysis algorithm for the K2 lightcurves. This involves better modeling of the stellar systematics and convolving that with a sine wave over many frequencies to create a systematics-insensitive periodogram (SIP). The raw periodogram of K2 data shows a very large feature at the 6-hour thruster-firing period, along with aliases of that 6-hour frequency. When they redo the lightcurve periodogram with SIP, they are able to remove that large systematic feature and pull out the red giant acoustic oscillations of the host stars.

Using SIP, you can also find a better estimate of the stellar rotation period, since the systematics are no longer clogging up the periodogram. They were able to accurately recover the stellar rotation period once the systematics were removed from the lightcurves. They can also use this method to find other periodic signals like short-period exoplanets, RR Lyrae stars, and eclipsing binaries. Their code is available on GitHub (don’t use the tweaked version!), and the paper recently came out on the arXiv.
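As I understood the talk, the core of SIP can be sketched in a few lines: at each trial frequency, fit the light curve with a linear model containing the systematics basis vectors plus a sine and cosine term, and record the power in the sinusoidal component. This is my own toy Python version (their real code is on GitHub; all names here are illustrative):

```python
import numpy as np

def sip_power(time, flux, basis, frequencies):
    """Toy systematics-insensitive periodogram (SIP).

    `basis` holds the systematics basis vectors, shape (n_basis, n_points).
    At each trial frequency, fit the systematics and a sinusoid
    simultaneously and record the squared amplitude of the sinusoid.
    """
    powers = []
    for f in frequencies:
        # Design matrix: systematics trends plus sin/cos at this frequency
        A = np.vstack([basis,
                       np.sin(2 * np.pi * f * time),
                       np.cos(2 * np.pi * f * time)]).T
        coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
        s, c = coeffs[-2], coeffs[-1]
        powers.append(s**2 + c**2)  # power of the periodic signal alone
    return np.array(powers)
```

Because the systematics are fit simultaneously with the sinusoid at every frequency, power belonging to the thruster-firing trends is absorbed by the basis vectors instead of leaking into the periodogram.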

A Catalog of Transit Timing Posterior Distributions for all Kepler Planet Candidate Events (Benjamin Montet, Caltech/Harvard)

Ben is working on a number of projects, including the transiting brown dwarf LHS-6343 and a paper on young M-dwarfs. Today he is talking about transit timing variations (TTVs), which Daniel Jontof-Hutter talked about yesterday. TTVs can tell us about the eccentricities, inclinations, and mass ratios of planets in the same system, all of which can be really difficult to measure using other methods.

When looking at TTV curves, the variations in transit timing usually follow a sinusoid, but not all points follow this trend. The current methods ignore non-Gaussian errors, assume white noise, ignore ill-fitting transits and short-cadence data, and don’t marginalize over transit shape (if a transit is not properly sampled, current methods usually ignore those points). But correlated noise matters too, and needs to be included in analyses.

Posteriors can help with this. If you fit many transit models and transit times, and infer the posterior distribution for the time of every transit observed with Kepler, you can use importance sampling to get a handle on correlated noise. Importance sampling can speed up your computation by focusing it on the places in your data where you know, a priori, that the transits will be occurring. They are currently working on all of the single-transit systems, and multiple systems are nearly ready for “prime time.” They are also looking for a cool name for the project, so give him a shout if you have an idea.
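To illustrate the importance-sampling trick with my own toy numbers (not Ben's pipeline): draw candidate transit times from a broad proposal centered on the linear-ephemeris prediction, then reweight the draws by the ratio of the target posterior to the proposal density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: an (unnormalized) posterior for one transit midtime -- here a
# narrow Gaussian standing in for the real transit-fit posterior.
def log_target(t):
    return -0.5 * ((t - 100.2) / 0.05) ** 2

# Proposal: a broad Gaussian around the predicted (linear-ephemeris)
# time, so samples land where the transit must a priori occur.
predicted_time, proposal_sigma = 100.0, 0.5
samples = rng.normal(predicted_time, proposal_sigma, size=20000)
log_q = -0.5 * ((samples - predicted_time) / proposal_sigma) ** 2

# Self-normalized importance weights (normalizing constants cancel)
log_w = log_target(samples) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = np.sum(w * samples)   # weighted estimate of the midtime
```

The payoff is that the expensive model only has to be evaluated at times where a transit could plausibly be, rather than across the whole light curve.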

Towards a Galactic Distribution of Exoplanets (Matthew Penny, Ohio State University)

Where are the known exoplanets? Microlensing is the only technique we currently have for probing for exoplanets in multiple regions of the galaxy. RV surveys are limited to nearby stars, Kepler looked in one region, and K2 will add a ring of fields along the ecliptic with more nearby targets, but microlensing surveys look straight into the galactic center to find the frequency of exoplanets as a function of galactic radial position.

Question: does planet formation in the bulge differ from that in the disk of the galaxy? Different galactic environments can have detrimental effects on the longevity of a protoplanetary disk, or can change the temperature of the protoplanetary disk to impede planet formation. They find that, based on microlensing surveys, there are a lot fewer planets in the galactic bulge than in the disk. They determine this by varying the ratio of disk planet formation efficiency to bulge planet formation efficiency to model the current distance distribution of microlensing planets. They find in their first results that the bulge planet formation efficiency must be lower than the disk planet formation efficiency in order to approximate the microlensing planet distance distribution that they see.

They want to find out what is the most probable distance/location in the galaxy to find exoplanets. They can measure distances to microlensing planets with parallax (for nearby planets), using a Bayesian method, or using the relative proper motions of stars to calculate the distances. While there are still some kinks to work out, this mix of techniques lets them probe a wide range of planet distances and begin to map the galactic distribution of exoplanets.

Constraining the Demographics of Exoplanets Using Results from Multiple Detection Methods (Christian Clanton, Ohio State University)

There have so far been about 150 confirmed exoplanets around M-dwarf stars. Confirming these planets really takes a collaborative effort between multiple detection methods. M-dwarfs are good targets for exoplanet searches because they are the most numerous stars in the galaxy, and RV and microlensing surveys are more sensitive to these lower-mass stars.

There have been individual exoplanet censuses of M-dwarfs using separate methods. Some constrain the actual frequency of planets around these stars, while non-detections (as in direct imaging) place upper limits on this number. If they combine the results from these various techniques (microlensing + RV, and now direct imaging), they can confirm quite a few planets around M-dwarfs and get a constraint on long-period giant planets around these stars. They ask: is there a single planet population distribution that is consistent with all of these M-dwarf exoplanet surveys?

They map the distribution of planets into distributions of the observables relevant to each technique (microlensing+RV+direct imaging). They then determine the number of expected detections for each survey, and compare that with the actual reported results and determine a likelihood of that particular planet population, and repeat for a variety of planet populations. They can then constrain the planetary mass function and power law slope of this distribution very well for M-dwarfs. What this means is that the results of the microlensing, RV, and direct imaging surveys are consistent with a single planet population distribution. They also want to include the results from Kepler to add constraints from transit surveys as well.

Sifting Through the Noise – Recalculating the Frequency of Earth-Sized Planets Around Kepler Stars (Ari Silburt, University of Toronto)

Kepler has been invaluable in attempting to answer the age-old question: is our planet unique? Unfortunately, we haven’t yet found a true Earth analog. We can estimate the frequency of Earth-like planets by extrapolating our results past our detection biases. We first have to overcome the geometric bias: only certain planetary systems are oriented to transit, and there’s a large population of planets that we simply don’t see in transit surveys because of this. The bias is a strong function of planetary radius and orbital semi-major axis.
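The geometric piece of that bias is easy to quantify: for a circular orbit, the probability that a randomly oriented system happens to transit is roughly the stellar radius divided by the orbital distance. A quick back-of-the-envelope with illustrative numbers:

```python
# Geometric transit probability ~ R_star / a for a circular orbit.
R_sun_au = 0.00465       # solar radius expressed in AU
a_earth_au = 1.0         # Earth's semi-major axis in AU

p_transit = R_sun_au / a_earth_au   # ~0.5% for an Earth-Sun analog
# So for every Earth analog we see transiting, roughly two hundred
# others are oriented so that we never see them.
```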

This bias causes a lot of large error bars and false positives in the Kepler data – mainly because we don’t understand the stars themselves. Such large error bars can skew our estimation of the number and frequency of Earth-sized planets. They have developed a new way of accounting for the uncertainty in planetary radius: take the known Kepler detection probabilities for planets of a given radius and combine them with the probability curve of the planet’s size. For example, the uncertainties of a detection may include a very small size, but we know that detecting something that small is very unlikely, so that value is downweighted. This allows them to correct the error distribution and use it to improve the estimate of the frequency of Earth-sized planets.
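A toy version of that down-weighting, under my own illustrative distributions (not the actual Kepler completeness curves): multiply the radius posterior by a detection-probability curve that rises with radius, then renormalize. The low-radius tail gets suppressed, shifting the inferred radius upward:

```python
import numpy as np

radii = np.linspace(0.2, 3.0, 500)            # planet radius, Earth radii

# Measured radius posterior: a toy Gaussian around 1.0 Earth radius
radius_pdf = np.exp(-0.5 * ((radii - 1.0) / 0.3) ** 2)
radius_pdf /= radius_pdf.sum()

# Detection probability rising with radius (hypothetical logistic form)
p_detect = 1.0 / (1.0 + np.exp(-(radii - 0.8) / 0.15))

# Down-weight the unlikely-to-be-detected small radii and renormalize
corrected = radius_pdf * p_detect
corrected /= corrected.sum()

mean_raw = (radii * radius_pdf).sum()
mean_corrected = (radii * corrected).sum()    # larger than mean_raw
```

Because the weighting function increases with radius, the corrected posterior mean is always pulled toward larger planets, which is exactly the sense of the correction described in the talk.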

They find that with these corrections, the frequency of Earth-sized planets in the Kepler sample is eta_Earth = 6.4%, which is about half of what it would be if they hadn’t accounted for the detection biases of Kepler. They anticipate that the Gaia spacecraft will help us better understand the stellar exoplanet hosts, which will further improve the accuracy of their eta_Earth value.

A population-based Habitable Zone perspective (Andras Zsom, MIT)

Most people visualize a habitable zone as a stripe around a star within which a planet is capable of supporting liquid water. If you look at it from a population perspective, you can see which planets fall interior to the HZ and are covered in water vapor, those exterior to the HZ with ice on their surfaces (or, like Mars, falling right on the ice/vapor limit), and those inside the HZ which can have liquid water.

From observations we have good estimates of the stellar properties and planetary orbital properties, but we don’t know much about the planet properties and surface climate. How can we know the surface climate without knowing the planetary atmosphere? They describe the HZ as a probability function and use it to estimate the occurrence rate of HZ planets. If you treat the stellar and planet properties as random variables, you can create probability density functions for them. They then sample each variable and use a 1D climate model to calculate the surface climate, repeat this to create an ensemble of climates, and then study the habitable sub-population and calculate its probabilistic HZ.
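In Monte Carlo form, the procedure they describe might look something like this sketch. The "climate model" here is a one-line equilibrium-temperature stand-in with a toy greenhouse term, and the priors are my own invented distributions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Draw stellar/planetary properties from assumed distributions (toy priors)
flux = rng.uniform(0.1, 2.0, n)         # stellar flux relative to Earth's
albedo = rng.uniform(0.1, 0.5, n)       # planetary Bond albedo
greenhouse = rng.uniform(0.0, 40.0, n)  # greenhouse warming in K (toy)

# Stand-in "climate model": equilibrium temperature plus greenhouse offset
S0, sigma_sb = 1361.0, 5.670e-8   # solar constant (W/m^2), Stefan-Boltzmann
T_eq = ((1 - albedo) * flux * S0 / (4 * sigma_sb)) ** 0.25
T_surface = T_eq + greenhouse

# Habitable = liquid-water surface temperatures; the fraction of draws
# that qualify plays the role of the probabilistic HZ occurrence
habitable = (T_surface > 273.15) & (T_surface < 373.15)
hz_probability = habitable.mean()
```

The real calculation swaps the one-liner for a 1D climate model, but the ensemble-then-count structure is the same.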

They find that the most probable area of HZ planets around M-dwarfs occurs around a few times the radius of the Earth, and around 0.5-1 times the stellar flux received at Earth (author’s note: this is a really cool 2D HZ probability plot!). So, they find that the occurrence rate of HZ planets is 0.001-0.3 planets/star for M-dwarfs, but that the surface pressure and atmosphere type strongly impact the surface climate and occurrence rate. We need better estimates of the potential atmospheres of exoplanets. Their code is called HUNTER and is available on GitHub.

The session following the coffee break will be co-chaired by me (Kimberly), so that blog post will be written by Ben Nelson.

ERES Day 2 Session 2: Career Panel

This session is our “Alternate Career Panel,” where we’ve invited three speakers who have all completed an astronomy PhD and then chosen to enter a career outside of a large, research-based academic environment. Our three speakers will be providing their perspectives on careers in a smaller academic environment, industry, and science policy.

Our speakers are:

Eric Jensen (EJ): is a Professor of Astronomy at Swarthmore College. He holds a BA in Physics from Carleton College, and a PhD in Astronomy from the University of Wisconsin-Madison.

Josh Shiode (JS): completed his PhD research at the University of California, Berkeley.
He is currently the Senior Government Relations Officer at the AAAS. Josh was also the John N. Bahcall Public Policy Fellow of the AAS.

Daniel Angerhausen (DA): received his PhD from the German SOFIA Institute and then worked at Caltech. He then moved on to a postdoc at RPI, and is now an NPP fellow at Goddard. He is in the process of starting up a company that he will tell us more about.

Last minute substitution – Daniel Angerhausen (DA) has graciously agreed to join the panel at the last minute to fill in for Dave Spiegel, who was not able to join us.

As I (Kimberly) was the moderator for this panel, I will direct you to the blog post written by Robert Morehead, which is posted on the ERES Blog Page.

ERES Day 2 Session 1: Exoplanet Instrumentation

Good morning everyone and welcome back to the ERES 2015 blog. Our first session this morning is on instrumentation related to exoplanet observations, and is chaired by Yale graduate student Joseph Schmitt.

The Habitable-zone Planet Finder Instrument: Pushing the Limits of Exoplanet Detection in the Near-Infrared (Sam Halverson, PSU)

The Habitable Zone Planet Finder (HPF) is an infrared Doppler spectrograph with 1 m/s precision, designed to find habitable-zone planets around M-dwarfs. The HPF team is a very large one, spanning multiple universities and multiple departments at PSU.

Why do we care about radial velocities in the near-infrared? The majority of nearby stars (from the RECONS survey) are M-dwarfs, which primarily emit light in the NIR. There is a high frequency of planets around these M-dwarfs – almost half of them have at least one planet – and this is a population that is largely untapped by telescopes like *Kepler*. A habitable-zone planet around one of these stars would induce an RV signal on the order of meters per second, so this instrument will be ideally placed to measure these RVs in the NIR.

The HPF will be mounted on the Hobby-Eberly Telescope (HET) at McDonald Observatory, of which PSU is a partner. The HPF borrows some of the success of HARPS in its design. It is a fiber-fed spectrograph, with science fibers and calibration fibers. It uses a HgCdTe absorbing detector, not a CCD. The entire spectrograph is contained within a cryostat chamber, similar to the APOGEE instrument. To achieve 1 m/s precision in the NIR, they plan to utilize a laser-frequency comb, but are also looking into a Fabry-Perot etalon for a frequency calibrator, which may be of use to the wider astronomical community rather than tailor-made for the HPF observations. The HPF is exploring a new area of exoplanet detection, piggybacking on the successes of previous instruments like HARPS and APOGEE in its design, and developing cutting edge solutions to the complex problem of high-precision RVs in the NIR.

Ultra Precise Environmental Control for High Precision Radial Velocity Measurements (Gudmundur Stefansson, PSU)

The search for habitable planets is exciting! (author’s note: indeed!) Improved radial velocity precision enables us to detect lower-mass planets, and HPF will focus on HZ planets around M-dwarfs. M-dwarfs are currently our best bet for finding rocky, low-mass planets in the HZ, and NIR detectors are better suited than optical detectors to study rocky planets around M-dwarfs. HPF is aiming for the same precision as HARPS, but in the NIR instead of the optical.

HPF will push the boundaries of the temperature and pressure stability achieved by HARPS. Temperature changes cause the echelle groove density to change, which degrades the precision; this can be on the order of 60 cm/s for a 10 mK change in temperature. They are aiming for a temperature stability of better than 1 mK in their cryostat. Environmental control is essential to reach 1 m/s RV precision in the NIR.

The HPF environmental control system opens the path to their 1 m/s goal precision. The components of the environmental control system are largely constructed and fabricated by PSU graduate students. Their actively controlled heaters keep HPF at 180 K with mK stability. They are currently testing and demonstrating the stabilizing effects of their thermal enclosure by testing things at HET. Right now HPF is in its mid-integration phase in New York, and they plan to have the integration phase done in a month or so, whereupon they will ship the instrument to PSU for further testing.

Improve RV Precision through Better Spectral Modeling and Better Reference Spectra (Sharon Xuesong Wang, PSU)

Detecting Earth is hard, especially in RV: the RV jitter in Keck’s HIRES spectrograph for Kepler-78 is ~2 m/s. Their goal is to accurately model the stellar spectrum and compare it to an empirically derived reference spectrum. They then apply a “best guess” RV, convolve the model stellar spectrum with the instrumental PSF, and iterate until they find the best RV needed to match the reference spectrum. This reveals the radial velocity signal within the stellar spectrum, which allows them to detect planets.

There are a number of things that can confuse this straightforward process. First, there are the barycentric correction terms (see Wright & Eastman 2014 for more details). And if you are observing from the ground, you may be detecting spectral lines that are not from the star itself, but telluric lines from the Earth’s atmosphere. The telluric lines won’t show the same radial velocity as the stellar lines, which can mess up an RV signal. You need to add telluric lines to your model, or completely mask out regions of telluric contamination, to get rid of this. But there are also *micro-tellurics* all over the visible and IR spectrum which cannot be masked out, so you *need* to accurately model the tellurics in order to improve your precision.

They also tested the reference spectra of the I2 cells at PSU’s Hobby-Eberly Telescope, and found that the reference spectra now differ from what they were 20 years ago. This should not be! So they tested again, and found that the newer Fourier transform spectrum of the I2 cell appears to be more accurate than the older one. In the future, they plan to use the improved telluric calibrations and the improved I2 reference spectra to improve the codes used to calculate RVs. They want a Python/GitHub/Bayesian RV code to be implemented, which will improve the precision and accuracy of ground-based RV measurements all around.

First exoplanet transit observations with SOFIA (Daniel Angerhausen, NASA Goddard)

Spectrophotometry in 30 seconds: sometimes we are lucky enough to observe edge-on transits, but usually we are looking at grazing or secondary transits, which are more difficult to characterize. Spectrophotometry looks at the transit light curve in many wavelengths and then compiles that into a spectrum: spectro– because they are creating a spectrum, and photometry because the spectra are built from photometric measurements of transits rather than from a traditional spectrograph. This can tell them about the atmospheric composition and structure of hot Jupiters.

SOFIA is a telescope on a plane, a Boeing 747-SP aircraft that flies higher than commercial aircraft. It’s a good compromise between a ground-based telescope and a space-based telescope: it gets above most of the atmosphere (99%) that plagues ground-based observations, but it can’t observe as often as a ground telescope because of flight restrictions. It operates over a wide wavelength range (0.3 microns to 1.6 mm) and is mobile, all of which is good for transit observations. “SOFIA is a space telescope that comes home every day,” which lets them continually update the instrumentation on the telescope, something you can’t do with space-based telescopes. This means that SOFIA will always have cutting-edge detection methods (provided that funding exists).

SOFIA had its first exoplanet observation in October 2013 with FLIPO, planet HD189733b, and achieved “space-based” quality of 185/160 ppm precision. As that was the first observation, they expect that the precision and accuracy of their instruments will only improve as they gain further understanding of them. They are currently working on GJ-1214b transit observations. Even when JWST goes up, people will still need alternatives for transit observations, and SOFIA is the perfect not-quite-space telescope.

Suborbital Demonstrations of Starshades (Anthony Harness)

“The firefly and the lighthouse”: an Earth-like planet is 10^10 times fainter than its host star and only 0.1 arcseconds away. This is comparable to trying to detect the light from a firefly flying in front of a lighthouse. A starshade is a way to mask out the light from the star and detect only the light from the planet. The benefit is that all of the light-masking takes place outside of your telescope, so if you also want full-light measurements (like from a spectrograph), you can do both at once.

The community needs to do end-to-end, system-level tests of starshades to prove that they work and to build confidence that starshades are worth it before we spend a lot of time and money making them. The best way to do this is real tests with real data on a smaller ground telescope as a proof of concept.

They wanted to try a zeppelin – but alas, no such luck. They next moved to a vertical-takeoff, vertical-landing rocket that can hover and be used as a starshade platform for a ground telescope. They want to ensure that the starshade has cm-level accuracy and stability – if light keeps leaking around the edges, the measurements are ruined. They plan to use two small telescopes: one for measurements and one as a guide telescope to make sure that the science telescope stays pointed at the star. Rockets are still a bit far off, however, so their first attempts will use a simple stationary starshade on a tall peak that can be angled to follow the star’s path, paired with a somewhat mobile telescope. They plan to attempt the stationary method this summer (2015), and their ultimate goal is to have the telescope 3 km away from the starshade and detect the disk around Fomalhaut. Their initial tests have been able to detect a “planet” at 10^-8 contrast with its “star”.

Multiband Nulling Coronagraphy (Brian A. Hicks)

“Nuller” – nulling coronagraph. This has similar results to a starshade, in that the starlight is “removed” from the image. Instead of directly blocking the light, a nuller works by using destructive interference of the starlight to reveal the fainter light that is also in the images. To detect a Jupiter around the Sun you need 10^-8 contrast, and for an Earth you need 10^-10 contrast. This is a direct-imaging technique that lets you get down to these precision levels. HZ exo-Earths require very specific inclinations for transits (they essentially must be within a few degrees of edge-on for us to see them transiting), but direct imaging favors “face-on” planetary systems rather than edge-on transit systems, so it can probe an entirely different population of planetary systems and reduce our current detection biases.

Direct imaging, because it favors face-on systems, means that they could observe a planet throughout its “seasons,” look at variations in the planetary albedo (reflectivity) over time, and possibly look at the effects of weather patterns on exoplanets. If they broaden the search away from “habitable,” they could even talk about the “infestible zone”: climates where extremophiles could live. Spectroscopy of directly imaged planets would require a large telescope, and they want to get the spectrum over a large wavelength range.

In addition to planets, they want to detect debris disks and protoplanetary disks, and observe the evolution of planetary systems through many stages (protoplanetary, planetary, and debris). They want to design a coronagraph that could work with a future space-based telescope like JWST and has capabilities in UV, visible, and IR wavelengths. Exo-C and Exo-S are potential future “Exo-coronagraph” and “Exo-starshade” missions, both aimed at direct imaging of planets.

Time for coffee! And then on to the panel on alternate career paths, moderated by yours truly.

ERES Session 6: Multiplanet Systems

Our final talk session of the day is on Multiple Planet Systems, and is chaired by PSU Postdoc Thomas Beatty.

Precise Planetary Masses, Radii and Orbital Eccentricities of Sub-Neptunes from Transit Timing (Daniel Jontof-Hutter, PSU)

Kepler‘s period-radius diagram shows us that sub-Neptune planets are really common, which is interesting because we don’t see any of those planets in our own solar system. We have been able to characterize a few planets using RV as well, and some with transit timing variations. With TTVs we are looking at the very slight variations in a planetary orbital period due to slight gravitational tugs on the planet from other planets in the system.
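The idea behind a TTV measurement can be sketched with a toy model (all of the numbers below are invented for illustration, not from the talk): generate transit times that follow a linear ephemeris plus a small sinusoidal perturbation, fit the best constant period, and the residuals of that fit are the TTVs.

```python
import numpy as np

# Toy transit times: linear ephemeris plus a sinusoidal TTV signal.
# All values are invented for illustration.
n = np.arange(40)            # transit epoch numbers
P, t0 = 10.0, 0.0            # orbital period [days], reference time
amp, P_ttv = 0.01, 13.0      # TTV amplitude [days], TTV super-period [epochs]
t_obs = t0 + n * P + amp * np.sin(2 * np.pi * n / P_ttv)

# Fit a linear ephemeris (best constant period) and take the residuals:
# the classic "observed minus calculated" (O - C) diagram.
coeffs = np.polyfit(n, t_obs, 1)
ttv = t_obs - np.polyval(coeffs, n)

print(f"TTV peak-to-peak: {np.ptp(ttv) * 24 * 60:.1f} minutes")
```

In a real system the perturbation shape encodes the perturbing planet's mass and eccentricity, which is what makes TTVs such a powerful characterization tool.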

To characterize TTV transiting planets (Kepler-79 in particular), they can assume that all of the planets are co-planar (they have the same inclinations). They also assume that there are no non-transiting perturbing planets, since the timing variations closely match what they’d expect for the system with just the transiting planets. They can use TTVs to get the masses of these planets and very good constraints on their eccentricities. This is remarkable because the eccentricities are much lower than would be detectable using RV measurements (RV doesn’t give accurate eccentricities below ~0.1). These eccentricities are 0.2% to 2%, and are still detectable.

TTVs also allow them to characterize the star Kepler-79 very well, with only 2% errors. Knowing the star really well, they can characterize the planets very well. Planet d in the system has a super-Earth mass but a density of only 0.1 g/cm^3, comparable to styrofoam! This means that the planet must have a much larger radius. When we compare the sample of planets characterized by RV to those characterized by TTV, we see that TTV is looking at a very different sample of planets. Each method has its own biases, so they are finding different types of planets. With TTVs we can find super-Earth-mass planets at orbital periods of up to 200 days – but just because they are super-Earth mass doesn’t mean that they are rocky! They can have a huge range of bulk densities.

On the Origin and Evolution of the Kepler-36 System (Thomas Rimlinger, UMD)

Kepler-36 is a Sun-like star with two planets in a 7:6 mean-motion orbital resonance, one a super-Earth and one a sub-Neptune. This is a very unusual orbital configuration: one is high density, one is low density; they are very tightly packed; this resonance is very rare. How did this happen? Most planet formation models can’t do this.

Theory: protoplanets form far out and migrate inwards, are bombarded by Mars-sized embryos, one gets its mantle stripped, one gets mass accreted. However, few simulations result in a 7:6 resonance. This makes this particular method very unlikely, because there are a few serendipitous things that must happen to make this system via this method.

Their method: take the theory above and modify it. Their version requires no mantle stripping, doesn’t start in the strange 7:6 resonance, and doesn’t require the planets to swap places. Instead, the planets start in the outer parts of the disk in a 2:1 resonance and migrate inwards. The inner planet then sweeps up leftover rocky material to become the super-Earth, and the outer one is left as the lower-density sub-Neptune. They modeled this in a simulation and were able to accurately replicate this system with minimal fine-tuning of the model.

Spacing of Kepler Planets: Sculpting by Dynamical Instability (Bonan (Michael) Pu, University of Toronto)

What can the orbits of multiplanet systems tell us about their formation? There are some systems, like Kepler-11, that have many, many planets packed into a very small space. These systems are also so-called “dynamically cold”: low eccentricities, little variation, similar inclinations. Looking at the distributions of Kepler multiplanet systems, we see two distinct families: those with many planets that are dynamically “cold,” and those with fewer planets that are dynamically “hot,” with the freedom to have large eccentricities, inclinations, or more widely spaced orbits.

Are these many-planet, cold systems really stable over long times? They simulated a Kepler-11-type system – planets that are all super-Earths and tightly spaced – and ran it dynamically to see how long something like that could survive. At planetary spacings of about 11 times the Hill radius, all of the simulation runs survived for the full simulation time (1 billion years). Adding inclined and eccentric orbits, however, destabilized systems at that spacing within 1 million years; you would need to increase the spacing even more to stabilize those systems.
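For context, a "times the Hill radius" spacing can be computed like this (a minimal sketch; the planet masses and orbits here are my own Kepler-11-like stand-ins, not the simulated values):

```python
# Spacing of two adjacent planets in units of their mutual Hill radius.
# Masses and orbits below are assumed Kepler-11-like stand-ins.
M_EARTH_IN_MSUN = 3.003e-6

def hill_spacing(m1, m2, a1, a2, m_star=1.0):
    """Separation (a2 - a1) in mutual Hill radii.

    m1, m2 in Earth masses; a1, a2 in AU; m_star in solar masses.
    """
    m1 *= M_EARTH_IN_MSUN
    m2 *= M_EARTH_IN_MSUN
    r_hill = ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0
    return (a2 - a1) / r_hill

# Two ~6 Earth-mass planets on tightly packed orbits:
print(f"spacing = {hill_spacing(6.0, 6.0, 0.10, 0.12):.1f} mutual Hill radii")
```

This pair comes out under the ~11 mutual Hill radii threshold quoted above, i.e. in the regime where long-term stability is not guaranteed.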

They conclude that some of the Kepler multiplanet systems are at the edge of stability, and so they must have been “sculpted” over eons. There must have once been many more multiplanet systems that developed in unstable formations and dynamically evolved to their lower-number planet states.

Implications for the False-Positive Rate in Kepler Planet Systems From Transit Duration Ratios (Robert C. Morehead, PSU)

This talk applies only to the multiple-planet systems detected by Kepler. As a reminder, Kepler has very low-resolution CCDs: each pixel is 4 arcseconds wide. So there is a lot of room in the Kepler photometry for false positives, blend scenarios, and binary star systems. When we look at these stars at higher resolution we can find out more about them, but we can’t do that for everything.

The ratio of transit durations can probe whether two planets orbit the same star. This is especially useful for systems where we know there is more than one star, or when we suspect that there is a blending scenario going on. The orbit’s eccentricity and the impact parameter affect the transit duration ratio. We mostly expect these systems to have co-planar planets, since they are all transiting. They use simulations to calculate the likelihood of the observed duration ratio under different scenarios: all planets around one star, and a suite of false-positive scenarios.
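A rough sketch of why duration ratios work as a same-star test (toy values; the real analysis uses full simulations over eccentricity and impact parameter): for circular, central transits of a single star, the duration scales as P^(1/3), so the normalized ratio xi = (T1/T2) / (P1/P2)^(1/3) should sit near 1.

```python
import math

# For a circular, central (b = 0) transit: T = (P / pi) * (R_star / a),
# and Kepler's third law gives a^3 = G * M_star * P^2 / (4 pi^2),
# so T scales as P^(1/3) for planets around the same star.
G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # kg, m
DAY = 86400.0

def duration_hours(P_days, m_star=1.0, r_star=1.0):
    """Central-transit duration in hours for a circular orbit."""
    P = P_days * DAY
    a = (G * m_star * M_SUN * P**2 / (4 * math.pi**2)) ** (1 / 3)
    return (P / math.pi) * (r_star * R_SUN / a) / 3600.0

P1, P2 = 10.0, 20.0
xi = (duration_hours(P1) / duration_hours(P2)) / (P1 / P2) ** (1 / 3)
print(f"xi = {xi:.3f}")  # ~1 when both planets transit the same star
```

A value of xi far from 1 (after accounting for eccentricity and impact parameter) is a hint that the two signals may not come from the same star.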

They find that most multiplanet systems have a high probability of being associated with the same star. Now, the problem with this is that the parameters used here are the original Kepler stellar parameters, ignoring any followup observations made. So, while they conclude that most multis are likely around the same star, there is always the chance that there is a blended source and therefore a more complicated system than one might originally think.

This concludes our science talks for the day. After this we have another poster pop session and our poster session, followed by dinner.

ERES Session 5: Poster Pops

A Poster Pop is a sixty-second advertisement for a poster. You get one slide to supplement your presentation, and the goal is to attract people to your poster. Poster pops are a challenge, since you have to squeeze your message into a short time. It’s a good time to practice your “elevator pitch”: describe your work to someone who doesn’t already know what you’re doing, and do it effectively in a short time (as if you only have the length of an elevator ride). The trick here is that you’re not talking about everything that you’re doing, but rather just what your poster is presenting.

I myself am presenting both a Poster Pop and a poster. My poster pop is in the later session, so this poster pop presentation post is only related to the early session. Keep in mind, these are my own opinions about what makes an effective poster pop presentation. If you disagree, I encourage discussion!

Alright, now after the poster pops are done, here are some of my thoughts about what makes an effective poster pop:
1. The slide:
– Do: make your figure easy to read
– Do: summarize the takeaway message of your poster
– Do: make your slide somehow related to what you’re saying. If I looked at your slide long enough, would I be able to figure out what’s going on?
– Do: make sure to credit all of your co-authors on the paper
– Do: make sure that the text and background are highly contrasted for easy reading
– Don’t: give away *all* of the milk for free. Leave them a reason to come to your poster.
– Don’t: make your slide completely unreadable.

2. The content of your pitch:
– Do: say what is unique about your poster. Why should I go to yours over someone else’s?
– Do: tell us where to find your poster
– Do: be excited and speak clearly!
– Do: have good timing! Don’t go over time!
– Do: make what you’re saying related to what’s on your slide.
– Do: make sure that you have a clear beginning, middle, and end.
– Don’t: say the same thing you’re saying in your oral pitch of your poster. This has a different purpose.
– Don’t: just read what your poster says.
– Don’t: stare down at your notes the whole time.
– Don’t: write your whole script out. Be flexible!
– Don’t: make fun of someone else’s work

Phew! That is a lot to fit into a 60-second pitch. Granted, for our poster pops we have 2 minutes, but that’s still pretty tight. Now that I’ve seen some of these poster pops, I will be well prepared (hopefully!) for my own poster pop later this afternoon.

ERES Session 4: Planetary Atmospheres 2

This is our second of two sessions on Planetary Atmospheres, chaired by Cornell University Research Associate Ramses Ramirez.

The Pale Orange Dot: The Climatic and Spectral Effects of Haze in Archean Earth’s Atmosphere (Giada Arney, University of Washington)

Giada is from the University of Washington Astronomy and Astrobiology Program. While we want to know about the habitability of distant exoplanets, Earth will always be the best-studied habitable planet. So, we want to study the habitability of Earth throughout its history. She studies the Earth during the Archean period. During this period (~3.8-3.5 billion years ago), life first developed. We had a lot of methanogens (methane-producing bacteria), which made methane much more abundant than oxygen in the atmosphere. We can look at Saturn’s moon Titan for a modern-day example of a methane-rich atmosphere. We think that the Archean Earth had an orange atmosphere with a methane haze, like Titan does. Since we think that the Earth at one point was hazy, this is a good phenomenon to study to understand potentially habitable worlds.

What would the climate be like on a hazy Earth-like world? As you increase the amount of methane in the atmosphere, you are increasing the methane haze. At around 30% methane the haze starts to cool the planet by shielding the sunlight, but at some point the cooling bottoms out. Conclusion: hazy worlds can be habitable. In the right conditions, it can work like a reverse greenhouse effect, cooling the planet to make it habitable.

How can we detect hazy atmospheres? Methane haze absorbs a lot of light at blue wavelengths, so objects that are missing portions of blue light in their reflection spectra are likely to be hazy. For transit transmission spectra, when you add haze into the atmosphere you can’t see as deeply into it, so your characteristic absorption features are more muted than normal. The spectrum at the ground of a hazy planet shows that the majority of the harmful UV light is blocked, so a hazy planet might have a greater chance at habitability, since the harmful types of light are reduced. And since the haze on the early Earth was biologically produced and regulated, it might even be a signature of life.

The robustness of using near-UV observations to detect and study exoplanet magnetic fields (Jake Turner, UVA)

The magnetic fields of planets give us insight into the internal structures and rotation period of exoplanets, atmospheric dynamics, formation and evolution of exoplanets, potential exomoons, habitability, and allow us to compare to solar system objects. Their method involved detecting asymmetries in the near-UV and infrared light curves. In the near-UV, you can detect the bow shock in front of the planet, like a boat going through water. This light curve should have an extended ingress and a shortened egress.

They used the Kuiper Telescope in the near-UV on 15 targets looking for exoplanet magnetic fields. On WASP-77b, they note that the transit does not look like their predicted bow-shocked magnetic field. In their 15 planets, they did not see any asymmetric transit shapes, which puts an upper limit on the potential magnetic fields of those planets. So, either those magnetic fields are really small, or perhaps this effect is not observable using that particular telescope or in that particular wavelength.

They use CLOUDY to simulate the ionization, chemical, and thermal states of the bow shock to see what the simulations say about their ability to detect the asymmetry. They find that there are no species absorbing in the near-UV that could cause an asymmetric transit, so their non-detections are due to their observing parameters, not a physical property of the planets. They conclude that near-UV transits are not robust for detecting magnetic fields, and that near-UV planetary radii show variations that can be used to constrain atmospheres.

Characterizing Transiting Exoplanet Atmospheres with Gemini/GMOS: First Results (Catherine Huitson, University of Colorado)

The main aim of their Gemini/GMOS program is to measure the dominant atmospheric absorbers in exoplanet atmospheres. They have broad-band, low-resolution optical coverage. Their 9-planet sample consists of low-density planets with good comparison stars; it is a comparative study, and they want to understand the systematic noise sources. The survey length is 3 years, which lets them improve signal-to-noise and increase repeatability. With GMOS, they can get similar precision to HST, but with fewer gaps in the data, which allows for better fitting of the transit curve and more accurate planetary and stellar parameters.

MOS = multi-object spectroscopy. The two spectra are the target and a reference star of the same spectral type, so that they can compare the two stars one wavelength at a time. They get a frame every 50 seconds to build a transit light curve, and as they go wavelength-by-wavelength they can see changes in the transit light curves with wavelength, and so build a transmission spectrum. Using this method they find that WASP-4b is a cloud-dominated hot Jupiter.

There are a number of observational challenges that they face during their analysis, and they are finding clever ways of solving each problem that arises. Through this method they find that XO-2b is a cloud-free hot Jupiter. Their first important result is that while WASP-4b and XO-2b are very similar planets in some respects, they have very different atmospheric structures – and they can detect that.

Hot and Heavy: Transiting Brown Dwarfs (Thomas Beatty, PSU)

Interesting presentation technique: start with your conclusions!

Conclusion 1: The brown dwarf desert may have an oasis.
Conclusion 2: transiting brown dwarfs provide links between hot Jupiters and field brown dwarfs, allowing us to use observations of one to understand the other (KELT-1b in particular)

Our understanding of the brown dwarf desert has evolved over the past 10 years or so. As of last year, we have found 7 BD companions in this region, all around F stars (~6250 K), which are more rapidly rotating than the Sun. KELT, unlike other transit surveys, doesn’t ignore F stars, which have largely been ignored before because RV detections are difficult for them. But now we see that F stars may host an oasis in the BD desert.

The atmospheres of planets and brown dwarfs behave differently. There’s a very distinct “kink” in the color-magnitude diagram of BDs at the L/T transition (where methane becomes dominant) that doesn’t exist for planets of the same temperature. BDs have a very tight color-temperature sequence, while HJs are much more scattered. The different behavior tells us how the atmospheres work, particularly with regards to carbon monoxide and methane. People postulate that HJs shouldn’t have methane because it is destroyed by the strong irradiation from the star – irradiation that field BDs don’t experience.

Well…KELT-1b is a highly irradiated BD in a tight orbit around its primary star. The day side of KELT-1b looks just like a field BD, a late-M or early L dwarf. They want to look at the night side of the BD to see if there’s some chemical gradient between the day and night sides. If so…well, that would be very interesting indeed and tell us about the L/T transition for irradiated BDs, and how that impacts HJs and directly imaged planets.

We now move on to the first of our poster-pop sessions. I will do my best to capture some of the dos and don’ts of how to give a poster pop once I have some examples of them to work with!

ERES Session 3: Fellowship and Grant Writing Panel

The first of the panel discussions is about how to write effective fellowship and grant applications. Members of the panel have all applied for, and won, various fellowships. They will be talking about what makes an application effective, important things to think about, and other tips and tricks learned through experience.

The slides from this session will be posted to the ERES website soon.

Our panelists are:

James Owen, Hubble Fellow (JO)
Laura Kreidberg, NSF Graduate Research Fellow (LK)
Brian Hicks, NASA Postdoctoral Program Fellow (BH)
Daniel Forman-Mackey, Sagan Fellow (DFM)

JO: The panelists are starting this session with a short presentation describing key points, specifics for each of their respective fellowships, good proposal writing, anonymous advice from selection panelists, and a Q&A. Participants can grill them further at lunch.

LK: Grad student fellowships are useful, super useful. There are no downsides. Guaranteed funding, no need for TA work if you don’t want them. Also good practice for writing more proposals in the future.

The NSF GRFP is open to senior undergraduates and first- and second-year graduate students. Apply all three years, even if you don’t have something your senior year! The application is pretty hefty, so start early and take your time. Two main criteria: intellectual merit (how good is the science?) and broader impacts (why is it useful to others?). Broader impacts can be presentations, conferences, public outreach, STEM mentoring, volunteering, tutoring, etc. Winning an NSF GRFP makes you eligible for the NSF GROW, which allows you to continue your research in a foreign country.

BH: A fellowship program versus a regular postdoc position. Pro: you set your own research program, you control your research budget, more ‘prestigious’. Cons: you’re on your own (potentially no supervisor). For ‘open’ fellowships, you can take the fellowship anywhere, while an ‘institutional’ fellowship is directly associated with a particular place that you then have to work at. There are open fellowships available around the world.

Statistics: ~300 new PhDs per year, ~100 fellowships available per year.

For the NPP, there are multiple application periods per year, and there are around 200 fellows in residence at any one time. There is a good stipend, benefits, and lasts for 2-3 years (the last one is funding dependent).

Advice: communicate directly with the adviser for the research opportunity before writing the proposal. Read the requirements carefully before you begin.

Go to the NASA Postdoc Website for a list of available positions.

DFM: Talking about the Sagan, Hubble, and Einstein fellowships, since they are pretty similar. Sagan is specifically for exoplanets, the Hubble is for anything, and Einstein is more for cosmology/extragalactic. Duration is up to 3 years, good benefits, good research travel budget, good stipend.

You must propose 3 institutions on your application, and an institution can only accept one of each fellowship per year. The success rate is about 1:17.

JO: General proposal advice: keep things clean and concise, don’t list too many “in prep” papers, start early and take your time. Know your audience and tailor specifically, do not submit the same one multiple times. DO NOT BREAK OR BEND THE RULES.

A good proposal will explain why your idea is relevant, what is your idea and how you will do it, and why you specifically are the right person to do this project. Also, make sure that your idea is achievable on a reasonable timescale. Why is your proposed institution the right one for the project?

Proposals are not academic papers! They are advertisements for your project and for you. Make your proposal stand out, as reviewers read thousands of pages per season. Get feedback (early) from people both in and outside of your field.

Anonymous advice solicited from reviewers:
1. promise something new, not more of the same
2. don’t make the panel angry. Don’t say how awesome you are, don’t use too many acronyms, make the proposal easy to read, follow the rules.
3. have diverse letter writers. An observer, a theorist, and if possible someone outside of your university.
4. “At the very least, the proposal should not be irritating!”

And now the Q&A portion:

(note: I wasn’t able to actually see the panelists as they were answering questions, so I apologize to the panelists if I attributed one of their comments to someone else.)

Q: Why ask a paper reviewer to write your letter? What will they bring?
A: JO: The context and scientific relevance of your work

Q: Are there any fellowships that aren’t only for US citizens?
A: LK and DFM: yes, Hubble and Sagan, and some others. Look carefully at the requirements.

Q: How do you decide whether or not you should have a direct supervisor for your project or be your own boss? So, postdoc or fellowship?
A: DFM and JO: It is mostly dependent on how confident you are in being your own boss and what your personal preference for work environment is. If you have a good independent project and don’t need firm structure to work, then a fellowship would work. You could also apply for a fellowship under an advisor’s project (“I want to take my fellowship and work on this project of yours. What do you think?”). If you like working more in a larger group, then perhaps a postdoc would be better for you.

Q: What else should you include in your proposal?
A: Audience member who is also a Sagan Fellow: make sure that you talk about successful presentations you have had, AAS or the like. Show that you can communicate your work effectively. When you lay out your project, be specific as to how you will accomplish your goals. Most Hubble and Sagan fellowships don’t go to people right out of grad school; they mostly go to people who already have one or more postdocs under their belts. The extra postdoc first shows your additional experience.

Q: Thoughts on resubmitting the same project with some modifications to make it better?
A: Audience member who is also an NPP: you can do that for sure. Take a close look at the comments from reviewers that you get back, and you can iterate over the reviews until it works. If the comments look good, it might just be that there was no funding for you that cycle or you’re on the waiting list. Keep trying!

Q: The NPP proposal is significantly longer than Hubble or Sagan. How does that change your writing style and focus?
A: BH: It’s about 15 pages, which is about the length of a research paper. You don’t need to change your focus, but you can elaborate more on points that you have to be concise on in your other proposals. You could also add sections, provided that they don’t confuse your proposal.

Audience comment: Europe is nice. There is also a higher success rate (~1:4) than most US fellowships and the salaries are competitive.

Audience comment (NPP winner, also NSF GRFP and NSF GROW winner): The NSF Postdoc Fellowship application is larger than the NPP application, is due soonest, and can serve as a “first draft” or first attempt at an NPP. You can even maybe get comments back on that proposal before you have to submit your NPP and get more feedback.

Q: For open fellowships, is it a bad idea to choose your PhD institution as your first choice?
A: JO: the anonymous feedback was split. If you choose it, have a really good reason why you pick that. Personal reasons (like two-body problems) are indeed good reasons. Panel members are people too, and some institutions will even break the “one fellow” rule for a personal reason. If the main reason for an institution is a personal reason, go ahead and put that in your proposal directly. Lame reasons just look lame. Of course, you always need a really good reason for any of your institutions.

Q:  How does having a postdoc help or hurt one’s chances at a position in industry?
A: (question was put off until tomorrow career panel, so this is my general impression): It probably doesn’t hurt to gain more experience that can transfer over. You can gain skills during this time that may be attractive to an industry company. You can make the switch at any time, don’t be intimidated.

Q: Time management? How do you balance everything, when you have dozens of applications?
A: JO: Very carefully. Make a clear schedule for yourself, and realize that you need a good solid few months of time to get everything done, and you probably won’t be getting much research done at the same time. Start thinking about your projects early, and talk to professors about it before you start writing.

This session was a lot of fun. There were a lot of fellowship and grant winners in the audience who shared their many and varied experiences in applying for and winning grants. Great audience participation!

Now, it’s time for lunch, where we will be continuing the discussions on applying for and winning fellowships.

ERES Session 1: Stellar Characterization Talks

The first session of participant talks is chaired by PSU graduate student Taran Esplin and will focus on characterization of planet hosting stars.

Accessing the fundamental properties of young stars (Ian Czekala, Harvard Smithsonian CfA, @iczekala)

Talking about two techniques for measuring young stars and their protoplanetary disks. What are the stellar properties of near-solar-mass stars before they hit the main sequence? How do we find this out? Stars start out above and to the right of the MS, and stars of different masses take different paths and different amounts of time to travel from their initial positions to the MS. Lower-mass stars take the longest to reach the MS.

Technique 1: protoplanetary disk radio interferometry. A 3D structure model gives temperature, density, and velocity as a function of stellar mass. Then imaging across a CO line reveals the *kinematic fingerprint* of the star. This can dynamically weigh single stars in their “teenage” years.

Technique 2: using stellar spectroscopy to get the stellar mass. Get a spectrum of a star and you can usually get pretty good info on the effective temperature and stellar radius. But right now we only really use parts of the spectrum that we are very familiar with. What happens if we can fit an entire large chunk of the spectrum? With a more complex spectral model, we need to be more careful with the statistical methods we use to guarantee a good fit (more careful than a simple chi^2!). Essentially: do the residuals between your model and your data resemble white noise? If not, you may need a covariance matrix to model your noise residuals more carefully.
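The white-noise check can be illustrated with a minimal sketch (fake residuals, not the actual pipeline): compute the lag-1 autocorrelation of the residuals, and if it sits far from zero, a plain chi^2 with independent errors isn’t enough and a covariance noise model is warranted.

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1_autocorr(r):
    """Lag-1 autocorrelation of a residual vector (0 for ideal white noise)."""
    r = r - r.mean()
    return np.dot(r[:-1], r[1:]) / np.dot(r, r)

# Fake residuals: pure white noise vs. correlated noise
# (white noise smoothed with an 11-point moving average).
white = rng.normal(size=1000)
correlated = np.convolve(rng.normal(size=1010), np.ones(11) / 11, mode="valid")

print(f"white:      {lag1_autocorr(white):+.2f}")       # near 0
print(f"correlated: {lag1_autocorr(correlated):+.2f}")  # far from 0
```

Residuals like the second case are the signal that a covariance-matrix (e.g. Gaussian-process) noise model is needed instead of independent error bars.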

In the future, we can combine dynamical masses from ALMA/SMA. The current sample is about 20 stars, and they hope to calibrate the early HR diagram.

Defining the Range of Chemistry for Exoplanet Interiors (John M. Brewer, Yale)

While we know of a lot of exoplanets, there are very few of them where we know their masses accurately. The problem is actually getting an accurate stellar surface gravity, which affects estimates of radius and temperature, since the parameters are degenerate.

They use Spectroscopy Made Easy (SME) to get a better handle on the stellar surface gravity, log(g), fitting over 7000 lines to get this better estimate. By comparing to stars that have asteroseismic surface gravities, they can test whether their SME procedure works better than their previous procedure. SME does a substantially better job, excepting a few stars that are rapid rotators, which mess up the comparison.

There don’t seem to be any trends in derived log(g) with other parameters, meaning that the SME method works well across a wide range of stars (excepting rapidly rotating stars). However, they now want to use this to get stellar compositions, and they use asteroid spectra to test the accuracy of the methods (this assumes that we know the composition of the Sun).

They test their method using the ratio of carbon to oxygen (C/O), since that ratio is highly dependent on the initial Solar composition. In their stellar sample, they find very few high-C/O stars (very few diamond planets, boo).

Exoplanet properties require accurate knowledge of the host stars, and the stellar log(g) is a key parameter for characterizing the star. Look out for a catalog with all of these parameters to be published soon.

Broadening our Horizons on Short-Period Stellar and Substellar Companions with APOGEE (Nicholas Troup, UVA)

Serendipitous science from APOGEE. About half of stars are in binary (or higher-order) systems, and on average every star has one planet. Companions can be stellar binaries, planets, or brown dwarfs. Brown dwarfs are the “missing link” between stars and planets. A strange phenomenon is the “brown dwarf desert”: a lack of BD companions within 5 AU of solar-type stars. This is strange, because we have lots of hot Jupiter planets very close to their stars at near-BD masses.

APOGEE was meant as a galactic structure survey, but has been very useful for exoplanet discovery and characterization. The APOGEE RV Companion Survey takes stellar spectra and can pull out stellar parameters and abundances in the APOGEE samples. Their spectral model is exceptionally good at fitting APOGEE spectra to get stellar properties. They can then measure radial velocities and search for planets. They have to go through a rigorous false-positive analysis to weed out the planet-like signals that aren’t actually planets.

About half of their stellar sample is giant stars, which really haven’t been searched for companions all that much. They have a galactic distribution of stars, looking both inwards and outwards from the galactic core. While their sample is mainly in the thin disk, there are a few thick disk and halo stars, as well as globular cluster and open cluster stars. After a first analysis, about half of their sample are stellar or BD companions, and the other half are potentially planetary companions. Using their methods they hope to “map the shores” of the BD desert, and build a galactic map of companion frequency.

Re-characterization of a gravity-darkened and precessing planetary system PTFO 8-8695 (Shoya Kamiaka, University of Tokyo)

This system is a T-Tauri star + hot Jupiter. The transits that were observed in 2009 and then in 2010 don’t have the same shapes, and they are trying to figure out why that is.

The star is rotationally deformed – that is, it’s rotating so fast that it more closely resembles a football than a soccer ball, and the central band of the star is gravity darkened, while the poles are brightened. There is also a precession of the orbital axis angle relative to the stellar spin axis, called “nodal precession”. Together, these two phenomena can explain the time-variable transit lightcurves. Since this system fits this scheme so well, it’s an ideal benchmark case for this model. Previous work in this area does not favor such a synchronized state (the two components of the model varying in a synchronized way) with the serious misalignment found in PTFO 8-8695.

Their method of characterizing the synchronicity of the nodal precession and the gravity darkening is expected to unveil the properties of younger or hotter stars, which are known to be more rapidly rotating than older or cooler ones.

Next session is the first of our Planetary Atmospheres talks. Stay tuned!

ERES Day 2 Session 3: Impostor Syndrome

A very critical issue for many (most) early career scientists is “impostor syndrome” (IS), which is defined as “an internal experience of intellectual phoniness despite external indications of success” (Clance P, Imes S. The impostor phenomenon in high achieving women: dynamics and therapeutic intervention. Psychotherapy: Theory, Research and Practice. 1978;15:241-247). PSU Astronomy professor, Jason T. Wright, is speaking about impostor syndrome, how it applies to you, and the effects and symptoms of IS.

We asked Jason to give this talk, and at first he seemed a bit surprised, and perhaps felt unqualified to give it. But, perhaps, this is the hallmark of IS. Jason agreed, however; he did a bunch of research and now feels ready to give this talk to us all.

Who is this IS talk for? Those who suffer from IS are often unaware that they do, and so those people need to be educated so that they can overcome it. If you have students, you should also know about this, since your students and advisees may have IS.

IS was first quantified in 1978 as the “impostor phenomenon” and was applied only to successful women: those who, despite numerous accomplishments in their field, persisted in thinking that they weren’t skilled enough for the jobs that they had and were only fooling their peers into thinking that they belonged.

Jason anonymously surveyed ERES participants about their own thoughts and experiences with IS before the conference. His results show that most participants believed that around 70% of their peers have been affected, at least mildly, by IS (and they included themselves in that percentage). This is a persistent and widespread phenomenon, and we need to educate ourselves about it more.

IS is…
– a mismatch between external evidence of accomplishments and self-image
– feeling fraudulent or phony, having achieved success not through genuine ability
– a distorted, unrealistic, unsustainable definition of competence
– a fear of being “discovered” not to be worthy of position or honors
– feeling of having deceived others to achieve position

All people, regardless of their accomplishments in life (like Jodie Foster and Meryl Streep), can be susceptible to IS. And this can apply even to the “Meryls” of science, or the supporting actors of science. Jason gives a poignant example in the form of John Asher Johnson (with Dr. Johnson’s permission, of course), quoting Dr. Johnson’s own talks and feelings of IS.

What are some of the misconceptions that contribute to IS?
– success is primarily due to extreme amounts of narrow technical competence (“The Cult of Smart”)
– competence is a fixed trait that some people have and others do not
– the most successful, competent people are perfectionists who never make a mistake and who never take on a problem without the necessary preparation

Well…none of these are actually true statements! Academic and research success is not based on any one quality, but rather exists in multiple dimensions. These include: the ability to identify important and answerable questions, the adeptness at basic complex problem solving, the ability to persevere on a problem, the possession of knowledge and skills, curiosity, luck (whether random or manufactured), and communication. This list comes from Ed Turner and Scott Tremaine, and is expounded upon by John Johnson on his own blog post.

The point of this is that success in academia is a constantly evolving process, and it is acquired from this set of skills, which improve and evolve with practice. No one gets everything right on the first try (or even the hundredth!). Most academics work on problems that are outside their area of expertise, and take that risk of mistakes in order to work on something interesting or valuable.

How can you begin to overcome IS? It’s something that you can do something about, both for yourself and for others. Talk about it! Normalize it! It happens to everyone, so there is no shame in feeling it. Try to emulate the personality traits of people you look up to (“fake it until you make it” or “try to live the dream”). Find supportive people to talk to and discuss the problem with. Make note of the nice and complimentary things that people say about you: make a file, save them, refer to them, and BELIEVE THEM!

IS contains within it some inherent double-standards. You think that your own successes are due to luck or deceptions, but everyone else’s successes are due to skills. You respect your peers’ and superiors’ judgments and knowledge — except when it’s about you. Also…you think so highly of yourself that you can deceive everyone you meet, but you don’t have enough skill to do what you’re trained to do. Acknowledge these logical flaws and use them to combat IS.

IS is not a recognized mental disorder, but happens to everyone. This phenomenon occurs across all demographics, no one is immune. If someone comes to you to talk about this, don’t brush it off, don’t shame it, be supportive and have an open discussion. Learning to combat IS, both in yourself and in others, is something that can only benefit this community at large.

ERES Session 7: Poster Session

As I am also presenting a poster, this isn’t a true live-blog, but rather my thoughts from the poster session. You can see the list of poster titles on the ERES schedule. Based on my own experiences, there are a few components to a poster presentation that help with effective communication:

1. An effective poster: I can go on and on about this, and in fact I have done so. My own poster at the conference was entitled “Best Practices for Effective Poster Design,” a copy of which can be found on my personal blog. If there are three things that I can stress about effective poster design they would be:

– Keep your words clear and concise. Abstracts and large blocks of text don’t belong on a poster. You can have more details on a website if you want, or a link to your paper that has all of the big descriptions, but your poster is *the visual aid for your oral pitch*. It should not be your paper in visual format.

– Use graphics that are easy to understand and that aid in telling your story. Make sure that your audience knows what they should be learning from each graphic even if you’re not there to explain it to them. You can do this with annotations, captions, and other visual clues. This means that the graphics that you use on your poster might not be exactly the same as the graphics that you use in your paper or in your presentation. Use as many graphics as you need to tell your story (a picture is worth a thousand words!), but no more than is necessary.

– Make sure that whatever organization scheme you choose and the style and colors that you choose don’t distract from the content of the poster. All your stylistic choices should only aid content comprehension, and not detract from it. Things like using clear organizational structure, simple backgrounds, and only a few colors will keep your poster from looking cluttered.

2. A clear and concise oral pitch: I say that your poster is the visual aid for your oral pitch, but that means that you need to have an effective oral pitch as well. An oral pitch for your poster is usually around 5 minutes long, and should take your audience through your poster with a more in-depth explanation than is on the poster itself. While you are doing this, you should be referencing graphics, charts, or numbers that are on your poster.

3. Good note-taking ability: This is perhaps the most overlooked skill in a poster presentation. By note-taking, I mean you the presenter taking notes on the interactions that you have at your poster. Who did you talk to? Who showed interest? What did you talk about? Did they have ideas or followup questions? Did they leave an email for you? All of these are tools that help you, the presenter, learn during your own presentation. Following up with the people you interacted with will also help develop your networking skills.

That’s it for today folks! Thanks for tuning in and I’ll see you bright and early tomorrow morning!

ERES Session 2: Planetary Atmospheres 1

Our first session of planetary atmosphere talks is chaired by PSU graduate student Natasha Batalha.

Hubble Space Telescope Spectroscopy of WISE Detected Brown Dwarfs (Adam C. Schneider, University of Toledo)

The Wide-Field Infrared Survey Explorer (WISE) is an all-sky survey that maps the sky in four infrared wavelengths, ranging from the near-infrared to the mid-infrared. As such, WISE has been very good at detecting very cool sources that appear bright in the infrared, like brown dwarfs. He’s talking today mostly about Y dwarfs, which are the coolest known brown dwarfs (but still aren’t planets). Y dwarfs range from around 400 K down to 250 K, which is near the temperature of Earth! Y dwarfs pop out in WISE images as bright green points, and we’ve found around 20 of them so far.

“Why WISE Ys?” These anchor our low-temperature spectroscopic models in places where “regular” stars don’t emit a lot of light. Y dwarfs, because they are near the temperatures of planets, can help us understand exoplanets, too. They are the “missing link” or the “crossover” objects. They are so faint that we can’t really detect them using ground-based telescopes, so we go to space with the Hubble Space Telescope (HST) and WISE.

They use HST to take spectra of their Y dwarfs in the Y-band (why the Y-band of WISE Ys?) because, as you get to lower and lower temperatures, spectral models predict a very distinctive ammonia absorption feature in the Y-band. Their question is: as you go to lower temperatures, why doesn’t that absorption feature appear as strongly as the models predict? Where does that ammonia go? Their answer is vertical mixing: the ammonia that would normally sit in the upper layers of the Y dwarfs and produce that ammonia feature is getting mixed into the lower layers, so there is less ammonia than expected. However, simultaneously fitting the near- and mid-infrared data is still an issue, as they want to fit all of the distinctive features at the same time. That work is ongoing.

Hot Jupiter Atmospheres Revealed with HST/WFC3 (Laura Kreidberg, University of Chicago)

Transit modeling code: batman = Bad-A** Transit Model cAlculatioN, currently in development and online at GitHub. If you help with testing and debugging, Laura Kreidberg will buy you a beverage.

They are observing the hot Jupiters WASP-43b and WASP-12b using an HST program called “Follow the Water.” As the name suggests, they are looking at the water bands of these HJs to get precise water abundance estimates. They find about 0.5–0.75 times the solar water abundance in WASP-43b. This is important to know because water is a key molecule in planet formation. WASP-43b also very nicely follows the mass-metallicity relation for planets: more massive planets have fewer heavy elements than less massive planets.

WASP-12b, also a HJ, is the “canonical” carbon-rich planet, where the C/O ratio is greater than 1 (recall from the last session that most stars have C/O around 0.5). Previous estimates of the C/O ratio were based on emission spectra; they instead took a transmission spectrum (through the atmosphere) and detected a very strong water feature. This is strange because, with a high C/O ratio, most of the oxygen should be bound up in carbon monoxide or carbon dioxide, not water. From their transmission spectra they find that an oxygen-rich model is more accurate than a carbon-rich model: C/O < 1 is a million times more likely than C/O > 1 (i.e. an oxygen-rich model is much, much more likely than a carbon-rich one).

In the future, they (and we) need to study the whole planet to characterize the atmosphere: not just the temperature/pressure structure or the composition, but the whole thing at once. We need to reconcile our results from emission spectra and transmission spectra, and they want to break the degeneracy between models of temperature/pressure and composition. Hopefully breakthroughs will be forthcoming!

Emission and Phase Curves from 3D Exoplanet Atmospheres (Y. Katherina Feng, UC Santa Cruz)

Katherina (a recent PSU graduate) is talking about the emission from planetary atmospheres using 3D model atmospheres. This will help us figure out what kinds of spectra we will be seeing when the James Webb Space Telescope finally launches, so that we can characterize our future observations. JWST will be much more precise and accurate in the infrared than current telescopes, so we need to understand what these planets should look like in JWST spectra. Doing 3D models will help us figure out what limits our accuracy in detection and modeling, and what biases are inherent in our 1D models.

They are testing a new 3D radiative transfer code “SPARC” to test the opacity grids against the 1D models, and have found that there are some discrepancies. If they use the same opacity grids as the 1D models do, the 3D code and the 1D codes match more closely. They apply this to WASP-43b and find that the assumed inclination of the planet has little effect on the spectral solution (they then assume an edge-on system).

When they look at the atmospheres at a variety of wavelengths and phases, they see that atmospheres are really complex 3D structures, and a 1D analysis of the atmosphere may not cut it. First, is there a difference between the day and night sides of the planets? The 3D model does show differences between the day- and night-side profiles. Is the 1D model biased towards the day side? Yes, it is. Our measurements should essentially be an average of the day and night sides, but 1D models are biased towards day-side values. They plan to test the limits of exoplanet spectroscopy using their new 3D models and ferret out the biases in our 1D retrievals.


We’re going to be changing gears and talking about how to write a successful fellowship or grant proposal with our Fellowship Panel, made up of four successful fellowship winners.

How to detect planets outside the solar system: The Transit Method

Hey everyone out there! This is the first in a series of posts about how you detect planets around other stars: the aptly named extra-solar planets, or exoplanets. There will most likely be 3 more posts like this dedicated to the other three major detection methods: radial velocity, gravitational microlensing, and direct imaging.

But for now, we’ll start with my favorite method, as this is what got me interested in exoplanets in the first place. As always, I appreciate comments and questions about content, as that always helps me improve my writing.

Now sit back, relax, and enjoy the following feature on….

The Transit Method

The transit method for the detection of planets is possibly the most well known way to find extra-solar planets. This is in large part due to the resounding success of NASA’s Kepler mission, which has revolutionized the search for and study of exoplanets (planets outside of our solar system) since it first started observing more than 150,000 stars near the constellation Cygnus. Kepler detects planets using the transit method, which looks for the minuscule dips in the light from a star when a planet moves between the star and our telescope.

This method is almost like looking for the shadow of the planet: imagine you are staring at a lit flashlight (no, please don’t actually do this; it’s bad for your eyes). The flashlight gives off very bright, very constant light that doesn’t change. Now, imagine that as you are looking at the flashlight, a bug flies right in front of it. You can’t actually see the bug because the flashlight is so bright in comparison, but you can see that the flashlight’s light is slightly dimmer than it was before. That is a transit.
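To put a rough number on that dimming: the fractional dip in brightness (the transit depth) is set by the ratio of the planet’s disk area to the star’s disk area. Here’s a minimal sketch in Python (the radii are standard textbook values, not numbers from this post, and real light curves are also shaped by limb darkening, which this ignores):

```python
# Transit depth: the fraction of starlight blocked by the planet's disk,
# approximately (R_planet / R_star)^2, ignoring limb darkening.

def transit_depth(r_planet_km, r_star_km):
    """Fractional drop in stellar flux during a central transit."""
    return (r_planet_km / r_star_km) ** 2

R_SUN = 696_000.0      # km
R_JUPITER = 69_911.0   # km
R_EARTH = 6_371.0      # km

# A Jupiter blocks about 1% of a Sun-like star's light; an Earth, ~0.008%.
print(f"Jupiter-Sun: {transit_depth(R_JUPITER, R_SUN):.4%}")
print(f"Earth-Sun:   {transit_depth(R_EARTH, R_SUN):.5%}")
```

That factor-of-100 difference in depth is why detecting Earth-sized planets takes space-based precision like Kepler’s.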

If something transits a star at regular intervals, Kepler scientists flag it as a planet candidate that needs further investigation. As of October 16, 2014, Kepler has detected 4234 planet candidates. Scientists then spend a lot of time making sure that the detected “planet” is not something else that can mimic a planetary transit signal. After the candidate has been vetted, it moves from “planet candidate” to “confirmed planet” and joins the ranks of the other 989 planets confirmed by Kepler scientists.

HAT-P-3b transit

GIF of HAT-P-3b in true color, created using the function TRANSITGIF from Jason Eastman. As the planet moves in front of the star, the light detected from the star decreases, then levels out in the middle of the transit, then increases back to the original level as the planet moves away from the star.

What makes the Kepler mission so remarkable and so unlike any other transit detector of its kind is that it was able to stare constantly at the exact same 150,000 stars for 3 years straight, obtaining a very large amount of extremely precise data. It was able to do so due to its very precise guiding system, which gave out for good in May 2013. Still, astronomers have found a way to make use of the perfectly functional telescope using an alternate guiding system — see K2 link below. Unfortunately, the reaction wheels gave out right before Kepler would have obtained enough data to detect an Earth-like planet around a Sun-like star. However, we still have detected more planets than previously thought possible, so I still call Kepler a resounding success!

Video courtesy of NASA Ames/SETI/J. Rowe: blue planets are non-Kepler discoveries, red planets are Kepler planets from before Feb. 2014, and gold planets are those confirmed in February 2014! The Feb. 2014 planets are from Rowe et al. (2014).

The planets detected by Kepler range in size from larger than Jupiter to smaller than Mercury, and can orbit their stars with periods anywhere from under a day to almost a year! With the most recent releases of the Kepler data we have started to detect Earth-sized planets in the habitable zones of their stars — amazing! Though we have yet to detect a true Earth analog, we are starting to close the gap and we will most likely detect one in the next decade. Keep a lookout!

Kepler Planets

Plot showing the distribution of planets discovered by Kepler and other detection methods. The distance from the planet to its star is on the horizontal axis, and the planet’s mass is on the vertical axis. The masses of Kepler planets are largely estimated from relations between planet size and mass, built from the subset of planets that have both properties measured.

Summary of the Transit Method:

Direct/Indirect: Indirect. Uses the changing light from the star to infer the presence of a planet.

What you learn from a transit: Planet size, orbital period, orbital distance, orbital inclination, eccentricity.
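As a sketch of how two of those quantities fall out of the observables: the planet’s size comes from the transit depth, and the orbital distance follows from the period via Kepler’s third law, assuming the star’s mass and radius are known from other measurements. The numbers below are illustrative hot-Jupiter-like values, not a real system:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
R_SUN = 6.957e8   # m

def planet_radius(depth, r_star):
    """Planet radius from transit depth: depth = (Rp / Rs)^2."""
    return r_star * math.sqrt(depth)

def semimajor_axis(period_s, m_star):
    """Kepler's third law for a circular orbit: a^3 = G M P^2 / (4 pi^2)."""
    return (G * m_star * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

# Illustrative example: a 1% transit depth and a 3-day period around a Sun-like star.
depth = 0.01
period = 3 * 86400.0  # seconds

print(planet_radius(depth, R_SUN) / R_SUN)          # ~0.1 stellar radii (Jupiter-ish)
print(semimajor_axis(period, M_SUN) / 1.496e11)     # ~0.04 AU, a classic hot Jupiter
```

Note that the transit only gives the planet’s size *relative to the star*, which is why accurate stellar parameters (like the log(g) work from the earlier session) matter so much for exoplanet characterization.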

Other things you could learn: whether there are other planets in the system you haven’t detected! If other planets are interacting with your detected planet, they can tug on it and shift the transit times you measure. These shifts, called Transit Timing Variations (TTVs), can indicate the presence of one or more undetected additional planets in that system.
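The basic TTV check can be sketched as an “observed minus calculated” (O-C) comparison against a constant-period ephemeris; nonzero residuals hint at an unseen perturber. The transit times below are made up for illustration:

```python
# Observed minus calculated (O-C) transit times: deviations from a strictly
# periodic ephemeris can reveal the tug of an unseen companion planet.

def predicted_times(t0, period, n_transits):
    """Transit times (days) expected if the orbit is strictly periodic."""
    return [t0 + n * period for n in range(n_transits)]

def o_minus_c(observed, t0, period):
    """Residuals (days) between observed and predicted transit times."""
    predicted = predicted_times(t0, period, len(observed))
    return [obs - pred for obs, pred in zip(observed, predicted)]

# Hypothetical data: a 10-day planet whose transits wander by a few minutes.
observed = [0.000, 10.002, 19.999, 30.004, 39.997]
residuals = o_minus_c(observed, t0=0.0, period=10.0)
print([round(r * 24 * 60, 1) for r in residuals])  # minutes: [0.0, 2.9, -1.4, 5.8, -4.3]
```

In practice the period and epoch are fit to the data rather than assumed, and a sinusoidal pattern in the residuals is what points to a perturbing planet.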

Other telescopes that use the transit method: K2, CoRoT, HATNet, WASP, TESS (future)


Well folks, that’s it for now. This was only a (very) short summary about a method that has broken tremendous ground over the past decade. There is a lot more information about transit detection, transit analysis, and various follow up and false-positive analysis methods that I haven’t even touched on here. But that’s a different post.

Hope you enjoyed learning a bit about the transit method. Next up: Radial Velocity.
