Graduate student and astronomy writer

Communication

On the Road to a PhD: Thesis Committee Meetings

This blog post is part of a series I’m writing along the road to my dissertation. These posts represent my personal experiences centered around getting a PhD in Astronomy & Astrophysics, and all views expressed within are my own. This is my story.


So, there’s a really important part of getting a PhD that I haven’t talked about yet, and that is the thesis committee. I just had what will probably be my final thesis committee meeting before I defend my dissertation, so I thought this would be a good time to talk about what a thesis committee is, what they do, and why we meet with them all the time.

A thesis committee is a group of usually 3-6 professors whose task it is to evaluate your readiness for being a Doctor of Philosophy in your field. They approve your dissertation, listen to your defense talk, and ask you many many questions to evaluate the breadth and depth of your knowledge on your thesis topic. Their goal is to see if you work and think and act like an expert in what you claim to be an expert in.

…that’s a weirdly accurate description

Many people think of the thesis committee like the gatekeepers of academia. Like…the Keeper of the Bridge of Death from Monty Python. “Who would pass the Doctoral Defense must answer me these questions three, ere the other side she see.”

But in reality there's a lot more to a thesis committee than just a group of test examiners. Your thesis committee should be composed of people who know you and the work you've done during your graduate career. They are your advocates, your support group, your guidance, and your allies. They monitor your progress from the time you start seriously working on things you'll put in your thesis, make sure you're making good academic progress, and check that you're developing the skills you'll need later in the workforce. They suggest collaborators, ideas and points of view you haven't thought of, directions for your work, or connections you could make.

They ensure that you’re applying for the right jobs, that when you graduate you’ll move on to something you’re qualified for. And they make sure you’re qualified for what you want to do. They provide a realistic view of your accomplishments in grad school and how they will be viewed outside your little academic bubble. They know because they’ve been there and they have the perspective that you as a graduate student don’t have from the trenches.

And if they don’t do those things, they shouldn’t be on your committee. Period.

To do all of these things, your committee needs to meet about once every 6 months to a year during the titular “thesis committee meetings.” These are organized by me, the grad student. During committee meetings, the grad student updates their committee on the research progress they’ve made since the last meeting, any academic achievements gained or milestones passed, career related things…essentially anything the committee members need to know in order to help you advance (aka, their job). This can take anywhere from an hour to three hours (oi vey, that was a loooooong meeting).

And one of the hardest jobs in academia is getting multiple professors in the same room at the same time for longer than an hour. Seriously, it’s like herding cats. And I have six of them to wrangle. Professors, I mean. Not cats. Planning thesis committee meetings is a serious test of your organizational and management skills. That should go on my resume…Anyways. Back on topic.

Your direct academic or research advisor is the head of your committee. They generally know you and your work the best, have seen nearly all of what you’ve worked on during your X many years as a grad student, been the PI on your research projects, etc. They can fill in the blanks if your other committee members have questions you can’t answer. Then, the rest of the committee is generally up to you.

The Thesis Committee, via PhD Comics. Seriously, will they read it? Who knows!

OK, probably not this. Definitely not this. This is so wrong, so don't think this is what a thesis committee should be. Your professor shouldn't be your worst enemy. They are the most realistic about your accomplishments, and they know what you've done and what you still need to do. Your committee shouldn't include an adversary. Your committee should only have allies. Honest ones, but allies nonetheless.

So who fills out the rest of the committee? Maybe people you've collaborated with on a project, or who work in a closely related subject, or who are mentors of yours. Someone only peripherally related to your work, but who has a broader background that you can draw on. Someone who remembers what it's like to be a grad student.

In my opinion, a token "famous" person can be a pretty useless committee member unless they know you and your work. Someone who doesn't really know you or your work, or care to, but whose name you can brag about having on your committee. Eh, sure, if you have all the rest of the support network in place, go for it. But someone who isn't your advocate or doesn't pull their weight on a committee doesn't really serve a purpose.

Most (all?) graduate programs require that one of your committee members is someone outside of your department. This is ostensibly to get an objective outside view that your work is PhD quality. Some people go for a random person (like an English literature professor for an astronomy PhD) just to check the box without adding more difficulty. I find that pretty useless unless the outside person can, say, help with career goals or something.

My outside people (I have two of them, by the way) are from the Department of Geosciences. One works in a research area related to astrobiology and so ties in well with extra-solar planet research (also ran our astrobiology field excursion to Italy a few years ago…awesome caving trip!). The other teaches about scientific and science writing from a scientist’s perspective, which is related to both additional work I’ve done for the thesis and my future career goals.

Also, don’t add a “token” female professor or other “token” minority professor just to say that your committee wasn’t completely older white men. That’s just insulting to those professors who aren’t being included for their expertise and therefore aren’t being treated equitably. “Tokenism” is just a bad idea all around, ‘kay?

I’m of the opinion that if a committee can’t challenge you and find the boundaries of your knowledge, they aren’t doing their job. That’s because I want my PhD to mean that I’m good enough to have one, and I want the people who tell me that I’m good enough to know what that means. I don’t want any “gimmes” with this.

So…that's a thesis committee and why they meet. I have six members on my committee, which is two more than I really need for the defense. Their areas of expertise all align with areas I've worked on, I've written papers with most of them or been a student in their classes, and I've gone to all of them for advice at one point or another. I've met with them four times since I started graduate research, and they've recently agreed that I can defend my dissertation in May.

So…woohoo! They think I’ll be ready! And…oi vey, I have to be ready…deep breaths. Here we go!

FYI, I use WhenIsGood.net to schedule my committee meetings. I like it more than Doodle.

 


On the Road to a PhD: Graduation timeline…yikes!

This blog post is part of a series I’m writing along the road to my dissertation. These posts represent my personal experiences centered around getting a PhD in Astronomy & Astrophysics, and all views expressed within are my own. This is my story.


So, I've known since the end of last summer that I intended to graduate at the end of this semester. I met with my thesis committee (which seems to just keep growing!) and presented them with a timeline of the work I intended to finish up, when I would be able to accomplish that work, and a (very) rough outline of my thesis, and they said…great!

Well, actually, they said, “This is a very ambitious set of goals in a very tight schedule with not much leeway. If it was anyone else but you, Kim, we’d have concerns. But since it’s you…” As if that wasn’t just a little bit of pressure. But hey, it’s always nice when your committee recognizes that you are a ridiculous overachiever capable of serious multitasking and incredible workloads when you put your mind to it.

So I’ve been working hard the past few months to keep on schedule, which has mostly been successful. A few of my projects have taken longer than expected, but that’s how research usually goes.

Only, I didn’t take into account the fact that Penn State requires an absurd amount of lead time between when you defend your dissertation and when they let you walk at graduation. The Penn State Spring Commencement is at the start of May, and in order to walk in the ceremony (and you know, get your diploma and such) you have to defend your dissertation to your committee two months before that. That’s the beginning of March (or a little over a month from now). Yikes! Seriously Penn State, more than two months before graduation?!? Seriously?!? (sidebar: how many of you read that in a “Grey’s Anatomy” voice? No? Just me then…)

So, I had a hard think a few days ago and tried to take an honest look at what I still needed to do, the time frame in which I had to do it, and how long it would really take to finish all that work. And get a mostly final draft of the thesis. And put together a talk. And prepare for the defense. Whilst simultaneously completing a science writing internship and finding a new job.

I came to the pretty tough realization that I couldn’t get all of that done in a month. For all my multitasking and overachieving abilities, I am only human (no radioactive spiders or Gallifreyan DNA over here, thank you very much) and I have limits. I can’t get all of that done in a month, while maintaining a good quality of work, supporting a husband who is also defending his thesis in a month, and keeping myself in good physical and mental health.

So, after having a good and honest talk with my advisor, I decided to wait and officially graduate in August rather than May. However, I don't actually want to still be in grad school in August doing graduate work (chas v'shalom — God forbid!). I want to have a real job doing real work in science communication. And my husband will have graduated by then and hopefully have started a new job as well. And research funding would be an issue throughout the summer.

So, the solution is to finish up all of the work I need to do in order to graduate (i.e. apply to graduate, format my thesis, defend, submit my thesis, sign the forms) by May, move on to my new job, and then come back in August to walk the walk. I’ll still technically be enrolled as a student, but will have everything done and a letter from the University stating that I’m ABG (“all but graduation” in the common parlance). It might be messy, but everything will get done, there won’t be two people in this house both frantically trying to defend in a month, funding will stretch, I won’t go spare trying to handle more than I am able to, and I will still be able to start a new job in May. Seems like a solid plan to me!

Now, I’ve actually got to get my committee together to let them know the new plan. Six professors all in the same place at the same time…oi vey.


On the road to a PhD: My AAS DTalk

This blog post is part of a series I’m writing along the road to my dissertation. These posts represent my personal experiences centered around getting a PhD in Astronomy & Astrophysics, and all views expressed within are my own. This is my story.


(Written during the AAS229 meeting in Grapevine, TX)

The annual winter meeting of the American Astronomical Society (AAS) is *the* conference to be at. There are many other conferences during the year, most of which are focused on smaller subfields or specialties in astronomy, and there's even a summer AAS meeting. But the winter meeting is bigger, better attended, more timely, and more important for a grad student than the other ones. Your collaborators are here. Your peers are here. Your potential future employers are here. And one of the milestones in your astronomy career can only happen at an AAS meeting: giving your public dissertation talk (DTalk).

Oral presentations at AAS meetings are a sprint and a gauntlet: you get a single 10-minute time slot that they recommend you split into a 5-minute talk, 3 minutes for questions, and 2 minutes of transition to the next speaker. Five minutes to explain everything you've been working on to a group of hypercritical scientists who are only partially paying attention to you (I'm actually writing this post as I'm sitting in someone else's talk, not paying as much attention as I should).

But, when you’re within a year of finishing your dissertation (before or after you defend) you can give a *20* minute talk. Once in your career you get the chance to tell the rest of the (U.S.) astronomical community what you’ve spent your graduate career working on, and the DTalk is their way of acknowledging that you’ve arrived. You get one shot, and mine was on Jan. 5th, 2017.

This series of tweets sums things up:

There was sort of a sense of surrealism that I’d finally “made it,” that it was my turn to give the all-important dissertation talk. I felt similarly when I had to apply to give the talk, and there was a checkbox that said “Do you confirm that you are within one year of attaining your PhD?” My brain went “oh my gosh oh my gosh oh my gosh I’m graduating soon…” The sense of “Have I really done enough to graduate? Are they really letting me graduate? I *can’t* have done enough for this.” A serious case of imposter syndrome, that they would actually give *me* a PhD for the work that I’ve done… #omgthesis seemed to sum it up.

That…didn’t go as well as I’d hoped. I’ve always had an issue taking credit for the things I’ve done, and I think that it’s only gotten worse over time. I downplay my academic accomplishments, a holdover from “little sibling syndrome” and “try not to brag about being smart in school.” Attempting to make the switch from “we did this work” to “I did this work” was very difficult, and I don’t think I did it well for the DTalk. Maybe it’ll go better for my actual defense…

Yeah, that didn’t help with the nerves. The big room, where all the plenary talks are, large enough to hold the entire meeting. It wasn’t anywhere near full for mine, but when I was trying to get over my nerves and the sense that I shouldn’t be there, being in the largest and most grandiose room possible didn’t help. About the only thing that was good about that room was the huge timer in front of the stage counting down my time. I could have done without that unconscious pressure.

It’s true. I was nervous, and I’m not one that usually gets nervous talking in front of people. I’ve been on a stage since I was 10 years old, performing in school plays and musicals and then later, giving science talks to the general public and participating in radio and web science shows. I know how to speak about science, and I know the work I did, but somehow I don’t think that came across. Though others have said that the talk went well, I was not particularly happy with my own performance. *I* knew that I stumbled and stuttered and flailed a bit, even if it didn’t seem that way to an audience (one of the key rules in theater is that the audience won’t know you messed up if you don’t tell them…I’m usually good at that).

My overall impression of my talk was not the best. I knew that half the things I said I didn't want to say, and half the things I wanted to say, I didn't. That doesn't usually happen to me, which is a big part of why I'm unhappy with how my talk went. I spent the entire 10 minutes of the talk after mine reminding myself that it's done, I can't change it, and I have to accept what happened and move on, because stressing and overanalyzing my performance isn't going to make it better. I think I might have convinced myself.

Do I feel different? Sort of. Though my advisor and my thesis committee agree that I can graduate in a few months, now the rest of the astronomy community knows it, too. I’m now subject to, “So, you’re on the market now, right?” “Where are you looking for postdocs?” (and doesn’t *that* open up a new can of worms?) “When is your defense date?” (not scheduled yet!) It highlights all of the things that are on my to-do list before I can *actually* graduate. And perhaps the expectations and pressure are all in my mind. It wouldn’t be the first time.

It feels like the DTalk was the starting line for the race to defense. Certainly the past 5 years of grad school, and the 4 years of undergrad, and even the 4 years of high school before that were all training grounds for this race. I’ve trained and I’ve practiced, and I’ve done the smaller races leading up to this one. I know I *can* do it. Now I just have to actually follow through.


Multimedia Astronomy Communication – Poster Page

This is the page for my poster on Multimedia Astronomy Communications, presented at AAS229 in Grapevine, TX.


 

Download:

Click here to download the poster on Multimedia Astronomy Communications


I’ve already had some fantastic interactions at my poster on astro communication. Here are a few of the things that I’ve talked about:

What is the next step? Where do you go from here?

Well, aside from defending and (hopefully) completing my PhD, I would really love to see some sort of course on communicating astronomy integrated into graduate programs. Communication is integrally linked to astronomy research and an astronomy career. As such, graduate programs and graduate advisors should help their students develop those skills just like they help develop research skills. A course, workshop, or seminar on good communication practices (research papers, grant applications, research presentations, etc) would go a long way towards improving communication within our field.

What does it mean to tell a Simple, Concrete, and Credible story?

Simple, Concrete, and Credible are what I call “must haves” when telling your science story. This means that the main message of your story should be straightforward, unambiguous, and believable. This doesn’t mean that your results have to be definite and unambiguous; nevernevernever misrepresent your science. But, you 100% should ensure that you explain your result in an unambiguous way.

What is the biggest hurdle you had to overcome while working on this?

Granted, this one was asked by one of the poster judges, but it’s a good question nonetheless. The single BIGGEST issue here is that little to no research currently exists on communicating astronomy in an effective way. And I mean, astronomy specifically. There is some research (by some awesome people) on teaching astronomy, but while teaching is a specific form of communication, there are many other types that astronomers deal with that haven’t been addressed.

That being said, there is a lot of research that has been done in other fields, like engineering and medicine. Not to mention all of the communication theory research that has been done in the actual field of communications. The AstroComms poster lists some of the available resources that I hunted down. There are a lot more. The biggest hurdle I faced (and continue to face) is finding the appropriate reference material, figuring out which parts could translate to astronomy, and working out the best way to apply the work to astronomy-specific problems.

 


 

This page will soon be updated with comments, notes, and suggestions I received at AAS229. Stay tuned for more information!

 


Live-blogging afterthoughts

So, after a few weeks of respite from the marathon blogging session at the Emerging Researchers in Exoplanet Science Symposium hosted at PSU, I thought I would share some thoughts about my experience live-blogging and the comments that I got back about how the live-blogging went for me.

First off, I personally found the experience to be a lot of fun (and a little bit of stress), and overall very rewarding for a number of reasons. For one, it was great practice at writing science concisely and with purpose, summarizing another scientist's work, and interpreting it through the lens of a non-expert. This was a good exercise for me since I want to eventually find a career in science journalism. Secondly, it was a good way to make sure that I was paying attention to the conference, to actively listen to the presentations and internalize them. I certainly learned a lot more and paid attention a lot more than I have at other conferences, simply because it was my job to do so. But now, I think I'd do something similar even if it wasn't my job, because it's so helpful. Thirdly, it was great to help out the conference participants and the people who weren't able to attend, to talk to new people about the blog, and to get the symposium more recognition on the cyber-waves. So, for those reasons and more, live-blogging the conference was a very rewarding experience.

However, when I do this again, I might do some things differently. For ERES, I wrote a three-paragraph minimum for each of the 10-minute science talks given by participants, and for the Q&A sessions of the panels and other featured talks. This was a lot of work. The blog for ERES was formatted more like conference proceedings than a blog, which I (and other organizing committee members) felt was appropriate for this type of conference. Since the blog was coupled with a Twitter feed, the Twitter feed was set up more in the "thoughts and comments and clever paraphrases" format (thanks Natasha Batalha!), and the blog was set up to be more of a scientific summary. When I do this again in the future, it is likely that I won't have an accompanying Twitter feed, so I would probably do less summary and more paraphrasing, thoughts, and comments.

Also, typing that much in such a short time gave me hand cramps! It was hard work, and I might do it differently next time. But it was a really rewarding experience. If you have a proclivity for writing, or are trying to find a way to retain more information at a conference, I suggest live-blogging. It’s just like taking notes, but with more personality, and can be tons of fun!

So…yeah. Those are my thoughts on live-blogging. Now that the live-blogging is done and I have free time again, I will be continuing with semi-regularly submitted blog-posts and other goodies in the future. Thanks for tuning in!

-Kim Cartier, aka “AstroLady”


ERES Day 2 Session 4: Statistical Characterization

Our post-lunch session is chaired by recent PSU PhD, Dr. Benjamin Nelson. These talks are all related to characterization of exoplanet systems using statistical methods.


Systematics-insensitive periodic signal search with K2 (Ruth Angus, University of Oxford)

When the second reaction wheel of Kepler failed, it was recommissioned as the K2 mission. The issue with K2 is that the telescope slowly drifts, and the thrusters need to be fired every so often to correct for it. That means that, as the stars appear to drift across the CCD, the precision of K2 is degraded compared to its predecessor.

To compensate for this, they need to come up with a better analysis algorithm for the K2 lightcurves. This involves better modeling of the stellar systematics and fitting that simultaneously with a sine wave over many, many frequencies to create a systematics-insensitive periodogram (SIP). The raw periodogram of K2 data shows a very large feature at the 6-hour thruster-firing period, along with aliases of that 6-hour frequency. When they redo the lightcurve periodogram with SIP, they are able to remove that large systematic feature and pull out the red giant acoustic oscillations of the host stars.

Using SIP, you can also get a better estimate of the stellar rotation period, since the systematics are no longer clogging up the periodogram. They were able to accurately recover the stellar rotation period once the systematics were removed from the lightcurves using SIP. They can also use this method to find other periodic signals like short-period exoplanets, RR Lyrae stars, and eclipsing binary stars. Their code is available on GitHub (don't use the tweeted version!), and the paper recently came out on the arXiv.
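(Author's note: to make the SIP idea concrete, here is a toy sketch in Python of the underlying linear-algebra trick — fitting a sinusoid at each trial frequency simultaneously with systematics basis vectors. The data and the single "drift" basis vector below are invented, and the real SIP code builds its basis vectors from many K2 light curves, so treat this as an illustration rather than their implementation.)

```python
import numpy as np

def sip_power(time, flux, basis, frequencies):
    """Toy systematics-insensitive periodogram: at each trial frequency, fit a
    sine + cosine pair simultaneously with the systematics basis vectors by
    linear least squares, and record the power left in the sinusoidal terms."""
    power = np.zeros(len(frequencies))
    for i, f in enumerate(frequencies):
        A = np.column_stack([
            np.sin(2 * np.pi * f * time),
            np.cos(2 * np.pi * f * time),
            basis,                      # columns = systematics trends
            np.ones_like(time),         # constant offset
        ])
        coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
        power[i] = coeffs[0] ** 2 + coeffs[1] ** 2   # sinusoid amplitude^2
    return power

# Fake data: a 3-day stellar signal plus a sawtooth "drift then thruster fire"
# trend repeating every 6 hours (0.25 d), plus white noise.
t = np.linspace(0.0, 80.0, 4000)
drift = 0.01 * ((t / 0.25) % 1.0 - 0.5)
flux = 0.005 * np.sin(2 * np.pi * t / 3.0) + drift + 0.001 * np.random.randn(t.size)

freqs = np.linspace(0.05, 5.0, 2000)                 # cycles per day
power = sip_power(t, flux, drift[:, None], freqs)
print("Recovered period [d]:", 1.0 / freqs[np.argmax(power)])
```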


*A Catalog of Transit Timing Posterior Distributions for all Kepler Planet Candidate Events* (Benjamin Montet, Caltech/Harvard)

Ben is working on a number of projects, including the transiting brown dwarf LHS-6343 and a paper on young M-dwarfs. Today he is talking about transit timing variations (TTVs), which Daniel Jontof-Hutter talked about yesterday. TTVs can tell us about eccentricities, inclinations, and mass ratios of planets in the same system, all of which can be really difficult to measure using other methods.

When looking at TTV curves, the variations in transit timing usually follow a sinusoid, but not all points follow this trend. The current methods ignore non-Gaussian errors, assume white noise, ignore ill-fitting transits and short-cadence data, and don't marginalize over transit shape (if a transit is not properly sampled, current methods usually ignore those points). But correlated noise matters too, and needs to be included in analyses.

Posteriors can help with this. If you fit many transit models and transit times, and infer the posterior distribution for the time of every transit observed with Kepler, you can use importance sampling to get a handle on correlated noise. Importance sampling can help speed up your computation by focusing it on the places in your data where you know, a priori, that the transits will be occurring. They are currently working on all of the single-transit systems, and multiple systems are nearly ready for "prime time". They are also looking for a cool name for the project, so give him a shout if you have an idea.
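(Author's note: here is a bare-bones illustration of the importance-sampling reweighting step, with made-up Gaussian stand-ins for the cheap "proposal" posterior and the more expensive "target" posterior — not Ben's actual transit models.)

```python
import numpy as np

rng = np.random.default_rng(42)

def log_proposal(t0):
    # cheap white-noise posterior for one transit time, approximated as a Gaussian
    return -0.5 * ((t0 - 100.000) / 0.010) ** 2

def log_target(t0):
    # stand-in for the expensive correlated-noise posterior (slightly shifted, wider)
    return -0.5 * ((t0 - 100.002) / 0.013) ** 2

# In practice these would be the stored posterior samples for that transit time
samples = rng.normal(100.000, 0.010, size=20_000)

# Importance weights = target density / proposal density, then normalize
log_w = log_target(samples) - log_proposal(samples)
w = np.exp(log_w - log_w.max())
w /= w.sum()

t0_reweighted = np.sum(w * samples)
ess = 1.0 / np.sum(w ** 2)   # effective sample size: how well the proposal covers the target
print(f"reweighted transit time: {t0_reweighted:.4f} d (effective samples: {ess:.0f})")
```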


Towards a Galactic Distribution of Exoplanets (Matthew Penny, Ohio State University)

Where are the known exoplanets? Microlensing is the only technique we currently have for probing for exoplanets in multiple regions of the galaxy. RV surveys are limited to nearby stars, Kepler looked in one region, and K2 will add a ring of fields along the ecliptic with more nearby targets, but microlensing surveys look straight into the galactic center to find the frequency of exoplanets as a function of galactic radial position.

Question: does planet formation in the bulge differ from that in the disk of the galaxy? Different galactic environments can have detrimental effects on the longevity of a protoplanetary disk, or can change the temperature of the protoplanetary disk enough to impede planet formation. They find, based on microlensing surveys, that there are a lot fewer planets in the galactic bulge than in the disk. They determine this by varying the ratio of disk planet formation efficiency to bulge planet formation efficiency to model the current distance distribution of microlensing planets. Their first results show that the bulge planet formation efficiency must be lower than the disk planet formation efficiency in order to reproduce the microlensing planet distance distribution that they see.

They want to find out what is the most probable distance/location in the galaxy to find exoplanets. They can measure distances to microlensing planets with parallax (for nearby planets), using a Bayesian method, or using the relative proper motions of stars to calculate the distances. While there are still some kinks to work out, this mix of techniques lets them probe a wide range of planet distances and begin to map the galactic distribution of exoplanets.


Constraining the Demographics of Exoplanets Using Results from Multiple Detection Methods (Christian Clanton, Ohio State University)

There have so far been about 150 confirmed exoplanets around M-dwarf stars. Confirming these planets really takes a collaborative effort between multiple detection methods. M-dwarfs are good targets for exoplanet searches because they are the most numerous stars in the galaxy, and RV and microlensing surveys are also more sensitive to planets around lower-mass stars.

There have been individual exoplanet censuses of M-dwarfs using separate methods. Some constrain the actual frequency of planets around these stars; others, non-detections (direct imaging), place upper limits on this number. If they combine the results from these various techniques (microlensing + RV, and now direct imaging), they can confirm quite a few planets around M-dwarfs and get a constraint on long-period giant planets around these stars. They ask: is there a single planet population distribution that is consistent with all of these M-dwarf exoplanet surveys?

They map the distribution of planets into distributions of the observables relevant to each technique (microlensing + RV + direct imaging). They then determine the number of expected detections for each survey, compare that with the actual reported results to determine the likelihood of that particular planet population, and repeat for a variety of planet populations. With this, they can constrain the planetary mass function and the power-law slope of this distribution very well for M-dwarfs. What this means is that the results of the microlensing, RV, and direct imaging surveys are consistent with a single planet population distribution. They also want to include the results from Kepler to add constraints from transit surveys as well.
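(Author's note: the statistical machinery boils down to forward modeling plus a detection-count likelihood. A cartoon version, with completely invented survey numbers and a single "planets per star" parameter, might look like this.)

```python
import numpy as np
from scipy.stats import poisson

# Cartoon of combining surveys: for a trial planet frequency, predict the expected
# number of detections in each survey and score it against the reported counts
# with a Poisson likelihood. All numbers here are placeholders for illustration.
surveys = {
    # name: (reported detections, expected detections as a function of planet frequency f)
    "microlensing":    (20, lambda f: 30.0 * f),
    "radial velocity": (15, lambda f: 25.0 * f),
    "direct imaging":  (0,  lambda f: 3.0 * f),   # a non-detection still constrains f
}

def log_likelihood(f):
    """f = trial mean number of giant planets per M dwarf."""
    return sum(poisson.logpmf(n_obs, n_expected(f)) for n_obs, n_expected in surveys.values())

grid = np.linspace(0.01, 2.0, 400)
logL = np.array([log_likelihood(f) for f in grid])
print("best-fit planet frequency:", round(grid[np.argmax(logL)], 3))
```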


Sifting Through the Noise – Recalculating the Frequency of Earth-Sized Planets Around Kepler Stars (Ari Silburt, University of Toronto)

Kepler has been invaluable in attempting to answer the age-old question: is our planet unique? Unfortunately, we haven't yet found a true Earth analog. We can estimate the frequency of Earth-like planets by extrapolating our results past our detection biases. We first have to overcome the geometric bias: only certain planetary systems are oriented so that they transit, and there's a large population of planets that we simply don't see in transit surveys because of this. The detection bias is a strong function of planetary radius and orbital semi-major axis.

On top of this bias, there are large error bars and false positives in the Kepler data, mainly because we don't understand the host stars themselves very well. Such large error bars can skew our estimate of the number and frequency of Earth-sized planets. What they've done is develop a new way of accounting for the uncertainty in planetary radius: take the known Kepler detection probability as a function of planet radius and combine it with the probability distribution of each planet's measured size. For example, the uncertainties of a detection may include a very small size, but we know that detecting something that small is very unlikely, so that value is down-weighted. This lets them correct the error distribution and improve the estimate of the frequency of Earth-sized planets.
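(Author's note: the down-weighting step is easy to sketch. Here is a toy version with a made-up completeness curve and a fake radius posterior, just to show how improbable small radii get down-weighted — this is not their actual pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake posterior samples for one candidate's radius (Earth radii), with big error bars
radius = rng.normal(1.0, 0.4, size=50_000)
radius = radius[radius > 0]

def detection_probability(r):
    """Hypothetical completeness curve: small planets are much harder to detect."""
    return 1.0 / (1.0 + np.exp(-(r - 0.8) / 0.15))

# Down-weight radius values that the survey would have been unlikely to detect at all
w = detection_probability(radius)
w /= w.sum()

print(f"naive radius estimate:   {radius.mean():.2f} R_Earth")
print(f"completeness-weighted:   {np.sum(w * radius):.2f} R_Earth")
```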

They find that with these corrections, the frequency of Earth-sized planets in the Kepler sample is eta_Earth = 6.4%, which is about half of what it would be if they hadn't accounted for the detection biases of Kepler. They anticipate that the Gaia spacecraft will help us better understand the stellar exoplanet hosts, which will further improve the accuracy of their eta_Earth value.


A population-based Habitable Zone perspective (Andras Zsom, MIT)

Most people visualize a habitable zone (HZ) as a stripe around a star that is capable of supporting liquid water. If you look at it from a population perspective, you can see which planets fall interior to the HZ and are covered in water vapor, which fall exterior to the HZ with ice on their surfaces (or, like Mars, fall right on the ice/vapor limit), and which fall inside the HZ and can have liquid water.

From observations we have good estimates on the stellar properties and planetary orbital properties, but we don’t know much about the planet properties and surface climate. How can we know the surface climate without knowing the planetary atmosphere?  They describe the HZ as a probability function to estimate the occurrence rate of HZ planets based on this HZ probability. If you treat the stellar and planet properties as random variables, you can create probability density functions out of them. They then sample each variable and use a 1D climate model to calculate the surface climate, repeat this to create an ensemble of climates, and then study the habitable sub-population and calculate their probabilistic HZ.
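(Author's note: here is the flavor of that Monte Carlo approach in a few lines of Python, with a deliberately crude stand-in for the climate calculation — a real 1D climate model is far more involved, and the parameter ranges below are invented.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Treat the poorly known properties as random variables and draw an ensemble
flux = rng.uniform(0.2, 2.0, n)          # stellar flux at the planet, in Earth units
albedo = rng.uniform(0.1, 0.5, n)        # Bond albedo
greenhouse = rng.uniform(0.0, 60.0, n)   # greenhouse warming in K (toy parameter)

# Crude stand-in for a 1D climate model: equilibrium temperature scaled from
# Earth's ~255 K (which assumes albedo 0.3), plus the toy greenhouse term
T_surface = 255.0 * (flux * (1.0 - albedo) / 0.7) ** 0.25 + greenhouse

habitable = (T_surface > 273.0) & (T_surface < 373.0)   # liquid-water range
print(f"fraction of draws with habitable surface climates: {habitable.mean():.2f}")
```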

They find that the most probable region for HZ planets around M-dwarfs occurs at a few times the radius of the Earth, and at around 0.5-1 times the stellar flux received at Earth (author's note: this is a really cool 2D HZ probability plot!). So, they find that the occurrence rate of HZ planets is 0.001-0.3 planets per star for M-dwarfs, but that the surface pressure and atmosphere type strongly impact the surface climate and occurrence rate. We need better estimates of the potential atmospheres of exoplanets. Their code is called HUNTER and is available on GitHub.


The session following the coffee break will be co-chaired by me (Kimberly), so that blog post will be written by Ben Nelson.


ERES Day 2 Session 2: Career Panel

This session is our “Alternate Career Panel,” where we’ve invited three speakers who have all completed an astronomy PhD and then chosen to enter a career outside of a large, research-based academic environment. Our three speakers will be providing their perspectives on careers in a smaller academic environment, industry, and science policy.

Our speakers are:

Eric Jensen (EJ): is a Professor of Astronomy at Swarthmore College. He holds a BA in Physics from Carleton College, and a PhD in Astronomy from the University of Wisconsin-Madison.

Josh Shiode (JS): completed his PhD research at the University of California, Berkeley. He is currently the Senior Government Relations Officer at the AAAS. Josh was also the John N. Bahcall Public Policy Fellow of the AAS.

Daniel Angerhausen (DA): received his PhD from the German SOFIA Institute, spent time at Caltech, and then moved on to a postdoc at RPI; he is now an NPP fellow at Goddard. He is in the process of starting up a company that he will tell you more about.

Last-minute substitution – Daniel Angerhausen (DA) has graciously agreed to join the panel at the last minute to fill in for Dave Spiegel, who was not able to join us.

As I (Kimberly) was the moderator for this panel, I will direct you to the blog post written by Robert Morehead, which is posted on the ERES Blog Page.


ERES Day 2 Session 1: Exoplanet Instrumentation

Good morning everyone, and welcome back to the ERES 2015 blog. Our first session this morning is on instrumentation related to exoplanet observations, and is chaired by Yale graduate student Joseph Schmitt.


The Habitable-zone Planet Finder Instrument: Pushing the Limits of Exoplanet Detection in the Near-Infrared (Sam Halverson, PSU)

The Habitable-zone Planet Finder (HPF) is an infrared Doppler spectrograph aiming for 1 m/s precision to find habitable-zone planets around M-dwarfs. The HPF team is a very large one, spanning multiple universities and multiple departments at PSU.

Why do we care about radial velocities in the near-infrared? The majority of nearby stars (from the RECONS survey) are M-dwarfs, which primarily emit light in the NIR. There is a high frequency of planets around these M-dwarfs, almost half of them have at least one planet, and this is a population that is largely untapped by telescopes like *Kepler*. A habitable-zone planet around one of these stars would induce an RV signal on the order of meters per second, so this instrument will be ideally placed to measure these RVs in the NIR.

The HPF will be mounted on the Hobby-Eberly Telescope (HET) at McDonald Observatory, of which PSU is a partner. The HPF borrows from the success of HARPS in its design. It is a fiber-fed spectrograph, with science fibers and calibration fibers. It uses a HgCdTe infrared detector, not a CCD. The entire spectrograph is contained within a cryostat chamber, similar to the APOGEE instrument. To achieve 1 m/s precision in the NIR, they plan to use a laser frequency comb, but are also looking into a Fabry-Perot etalon as a frequency calibrator, which may be of use to the wider astronomical community rather than being tailor-made for HPF observations. The HPF is exploring a new area of exoplanet detection, piggybacking on the successes of previous instruments like HARPS and APOGEE in its design, and developing cutting-edge solutions to the complex problem of high-precision RVs in the NIR.


Ultra Precise Environmental Control for High Precision Radial Velocity Measurements (Gudmundur Stefansson, PSU)

The search for habitable planets is exciting! (author's note: indeed!) Improved radial velocity precision enables us to detect lower-mass planets, and HPF will focus on HZ planets around M-dwarfs. M-dwarfs are currently our best bet for finding rocky, low-mass planets in the HZ. NIR detectors are better suited than optical detectors to study rocky planets around M-dwarfs. HPF is aiming for the same precision as HARPS, but in the NIR instead of the optical.

HPF will push the boundaries of the temperature and pressure stability achieved by HARPS. Temperature changes cause the echelle groove density to change, which degrades the precision; this can be on the order of 60 cm/s for a 10 mK change in temperature. They are aiming for a temperature stability of better than 1 mK in their cryostat. Environmental control is essential to reach 1 m/s RV precision in the NIR.

The HPF environmental control system opens the path to their 1 m/s precision goal. The components of the environmental control system are largely constructed and fabricated by PSU graduate students. Their actively controlled heaters keep HPF at 180 K with mK stability. They are currently testing and demonstrating the stabilizing effects of their thermal enclosure at HET. Right now HPF is in its mid-integration phase in New York; they plan to have the integration phase done in a month or so, whereupon they will ship the instrument to PSU for further testing.


Improve RV Precision through Better Spectral Modeling and Better Reference Spectra (Sharon Xuesong Wang, PSU)

Detecting Earth is hard, especially in RV: the RV jitter in Keck's HIRES spectrograph for Kepler-78 is ~2 m/s. Their goal is to accurately model the stellar spectrum and compare it to an empirically derived reference spectrum. They apply a "best guess" RV, convolve the model stellar spectrum with the instrumental PSF, and iterate until they find the RV that best matches the reference spectrum. This reveals the radial velocity signal within the stellar spectrum, which allows them to detect planets.
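(Author's note: conceptually, the fit looks something like the toy sketch below — Doppler-shift a template, smear it by the instrumental profile, and grid-search for the shift that best matches the data. The spectrum, lines, and PSF here are all invented, and the real pipelines also model the iodine reference spectrum and the tellurics discussed next.)

```python
import numpy as np

C_LIGHT = 299_792_458.0   # m/s

def forward_model(wave, template_wave, template_flux, rv, psf_sigma_pix=2.0):
    """Doppler-shift the template, resample onto the data grid, convolve with a Gaussian PSF."""
    shifted = np.interp(wave, template_wave * (1.0 + rv / C_LIGHT), template_flux)
    x = np.arange(-10, 11)
    kernel = np.exp(-0.5 * (x / psf_sigma_pix) ** 2)
    return np.convolve(shifted, kernel / kernel.sum(), mode="same")

# Fake template with two absorption lines, "observed" at +50 m/s with noise
wave = np.linspace(5000.0, 5010.0, 2000)     # Angstroms
template = (1.0
            - 0.5 * np.exp(-0.5 * ((wave - 5003.0) / 0.05) ** 2)
            - 0.3 * np.exp(-0.5 * ((wave - 5007.0) / 0.05) ** 2))
data = forward_model(wave, wave, template, rv=50.0) + 0.001 * np.random.randn(wave.size)

# Grid search over trial RVs for the best chi-square match
rv_grid = np.linspace(-200.0, 200.0, 401)    # m/s
chi2 = [np.sum((data - forward_model(wave, wave, template, rv)) ** 2) for rv in rv_grid]
print("best-fit RV [m/s]:", rv_grid[int(np.argmin(chi2))])   # roughly the injected +50 m/s
```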

There are a number of things that can confuse this straightforward process. First, there are barycentric correction terms (see Wright & Eastman 2014 for more details). And if you are observing from the ground, you may be detecting spectral lines that are not from the star itself but are instead telluric lines from the Earth's atmosphere. The telluric lines won't show the same radial velocity as the stellar lines, which can mess up an RV signal. You need to add telluric lines into your model, or completely mask out regions of telluric contamination, to get rid of this. But there are also *micro-telluric* lines all over the visible and IR spectrum which cannot be masked out, so you *need* to accurately model the tellurics in order to improve your precision.

They also test the reference spectra of the I2 cells at PSU's Hobby-Eberly Telescope, and found that when they measured the I2 cells recently, the reference spectra were different from what they were 20 years ago. This should not be! So they tested again, and found that the newer Fourier transform spectrum of the I2 cell appears to be more accurate than the older one. In the future, they plan to use the improved telluric calibrations and the improved I2 reference spectra to improve the codes used to calculate RVs. They want a Python/GitHub/Bayesian RV code to be implemented, which will improve the precision and accuracy of ground-based RV measurements all around.


First exoplanet transit observations with SOFIA (Daniel Angerhausen, NASA Goddard)

Spectrophotometry in 30 seconds: sometimes we are lucky enough to observe edge-on transits, but usually we are looking at grazing or secondary transits, which are more difficult to characterize. Spectrophotometry looks at the transit light curve in many, many wavelengths and then compiles that into a spectrum: spectro– because they are creating a spectrum, and photometry because they are creating these spectra from photometric measurements of transits rather than with a traditional spectrograph. This can tell them about the atmospheric composition and atmospheric structure of hot Jupiters (HJs).

SOFIA is a telescope on a plane, a Boeing 747-SP aircraft that flies higher than commercial aircraft. It's a good compromise between a ground-based telescope and a space-based telescope: they get above most of the atmosphere (99%) that plagues ground-based observations, but they can't observe as often as a ground-based telescope because of flight restrictions. It operates over a huge wavelength range (0.3 micron to 1.6 mm) and is mobile, all of which is good for transit observations. "SOFIA is a space telescope that comes home every day," which lets them continually update the instrumentation on the telescope, something you can't do with space-based telescopes. This means that SOFIA will always have cutting-edge instrumentation (provided that funding exists).

SOFIA had its first exoplanet observation in October 2013 with FLIPO, planet HD189733b, and achieved “space-based” quality of 185/160 ppm precision. As that was the first observation, they expect that the precision and accuracy of their instruments will only improve as they gain further understanding of them. They are currently working on GJ-1214b transit observations. Even when JWST goes up, people will still need alternatives for transit observations, and SOFIA is the perfect not-quite-space telescope.


Suborbital Demonstrations of Starshades (Anthony Harness)

“The firefly and the lighthouse”: an Earth-like planet is 10^10 times fainter than its host star and only 0.1 arcseconds away, which is comparable to trying to detect the light from a firefly flying in front of a lighthouse. A starshade is a way to mask out the light from the star and only detect the light from the planet. The benefit is that all of the light-masking takes place outside of your telescope, so if you want full-light measurements (like from a spectrograph), you can do both at once.

The community needs to do end-to-end, system-level tests of starshades so that we can prove the concept works and gain confidence that starshades are worth it before we spend a lot of time and money building them. The best way to do this is to do real tests with real data on a smaller ground-based telescope as a proof of concept.

They wanted to try a zeppelin – but alas, no such luck. They next moved to a vertical-takeoff, vertical-landing rocket that can hover and be used as a starshade platform for a ground telescope. They want to ensure that the starshade has centimeter-level accuracy and stability – if light keeps leaking around the edges, the measurements are ruined. They plan to use two small telescopes: one for measurements and one as a guide telescope to make sure that the science telescope stays pointed at the star. Rockets are still a bit far off, however, so their first attempts will be a simple stationary starshade on a tall peak that can be angled to follow the star's path, paired with a somewhat mobile telescope. They plan to attempt the stationary method this summer (2015), and their ultimate goal is to have the telescope 3 km away from the starshade and detect the disk around Fomalhaut. Their initial tests have been able to detect a "planet" at a 10^-8 contrast to its "star".


Multiband nulling coronagraphy (Brian A. Hicks)

“Nuller” – nulling coronagraph. This has similar results to a starshade, in that the starlight is "removed" from the image. Instead of directly blocking the light, a nuller works by using destructive interference of the starlight to reveal fainter light that is also in the image. In order to detect a Jupiter around the Sun, you need 10^-8 contrast, and for an Earth you need 10^-10 contrast. This is a direct-imaging technique that lets you get down to these contrast levels. HZ exo-Earths require very specific inclinations for transits (they essentially must be within a few degrees of edge-on for us to see them transiting), but direct imaging favors "face-on" planetary systems rather than edge-on transiting systems, so it can probe an entirely different population of planetary systems and reduce our current detection biases.

Direct imaging, because it favors face-on systems, means that they could observe a planet throughout its "seasons," look at variations in the planetary albedo (reflectivity) over time, and possibly look at the effects of weather patterns on exoplanets. If they broaden the search away from "habitable," they could even talk about the "infestible zone," climates where extremophiles could live. Spectroscopy of directly imaged planets would require a large telescope, and they want to get spectra over a large wavelength range.

In addition to planets, they want to detect debris disks and protoplanetary disks, and observe the evolution of planetary systems through many stages (protoplanetary, planetary, and debris). They want to design a coronagraph that could work with a future space-based telescope like JWST and that has capabilities at UV, visible, and IR wavelengths. Exo-C and Exo-S are potential future "exo-coronagraph" and "exo-starshade" missions, both aimed at direct imaging of planets.


Time for coffee! And then on to the panel on alternate career paths, moderated by yours truly.


ERES Session 6: Multiplanet Systems

Our final talk session of the day is on Multiple Planet Systems, and is chaired by PSU Postdoc Thomas Beatty.


Precise Planetary Masses, Radii and Orbital Eccentricities of Sub-Neptunes from Transit Timing (Daniel Jontof-Hutter, PSU)

Kepler's period-radius diagram shows us that sub-Neptune planets are really common, which is interesting because we don't see any planets like that in our own solar system. We have been able to characterize a few of these planets using RV, and some with transit timing variations (TTVs). With TTVs, we are looking at the very slight variations in a planet's orbital period due to gravitational tugs from the other planets in the system.

To characterize TTV transiting planets (Kepler-79 in particular), they can assume that all of the planets are coplanar (they have the same inclinations). They also assume that there are no non-transiting perturbing planets, since the timing variations closely match what they'd expect for that system with just the transiting planets. They can use TTVs to get the masses of these planets and very good constraints on their eccentricities. This is remarkable because the eccentricities are much lower than would be detectable using RV measurements (RV doesn't give accurate eccentricities below ~0.1). These eccentricities are 0.2% to 2%, and are still detectable.

TTVs also allow them to characterize the star Kepler-79 very well, with only 2% errors. Knowing the star really well, they can characterize the planets very well. Planet d in the system has a super-Earth mass but a density of only 0.1 g/cm^3, comparable to atmospheric densities near Earth's surface! This means that the planet must have a very extended radius for its mass. When we compare the sample of planets characterized by RV to those characterized by TTVs, we see that TTVs probe a very different sample of planets. Each method has its own biases, so they find different types of planets. With TTVs we can find super-Earth-mass planets at orbital periods of up to 200 days, but just because they are super-Earth mass doesn't mean that they are rocky! They can have a huge range of bulk densities.


On the Origin and Evolution of the Kepler-36 System (Thomas Rimlinger, UMD)

Kepler-36 is a Sun-like star with two planets in a 7:6 mean-motion orbital resonance, one a super-Earth and one a sub-Neptune. This is a very unusual orbital configuration: one is high density, one is low density; they are very tightly packed; this resonance is very rare. How did this happen? Most planet formation models can’t do this.

Theory: protoplanets form far out and migrate inwards, are bombarded by Mars-sized embryos, one gets its mantle stripped, and one accretes mass. However, few simulations of this process result in a 7:6 resonance, which makes this particular formation channel very unlikely: several serendipitous things must all happen to produce this system that way.

Their method: take the theory above and modify it. Their version does not require mantle stripping, does not require the planets to start out in the unusual 7:6 resonance, and does not require them to swap places. Instead, the planets start farther out in a 2:1 resonance and migrate inwards. The inner planet then sweeps up leftover rocky material to become the super-Earth, while the outer one is left with its lower-density material. They modeled this in a simulation and were able to accurately replicate this system with minimal fine-tuning of the model.


Spacing of Kepler Planets: Sculpting by Dynamical Instability (Bonan (Michael) Pu, University of Toronto)

What can the orbits of multiplanet systems tell us about their formation? There are some systems, like Kepler-11, that have many, many planets packed into a very small space. These systems are also so-called "dynamically cold," with low eccentricities, low mutual inclinations, and not much orbital variation. Looking at the distributions of Kepler multiplanet systems, we see two distinct families: those with many planets that are dynamically "cold," and those with fewer planets that are dynamically "hot" and have the freedom to take on large eccentricities, large inclinations, or more widely spaced orbits.

Are these many-planet, dynamically cold systems really stable over long times? They simulated a Kepler-11-type system – planets that are all super-Earths and tightly spaced – and integrated the system forward to see how long something like that could survive. At a planetary spacing of about 11 times the mutual Hill radius, all of the circular, coplanar simulation runs survived for the full simulation time (1 billion years). At that same spacing, however, inclined and eccentric orbits destabilized the systems within 1 million years; you would need to increase the spacing even more to stabilize those systems.
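(Author's note: the "spacing in mutual Hill radii" setup is simple to write down. Here is a little helper, with placeholder masses and an arbitrary innermost orbit, that lays out a Kepler-11-like chain; you would then hand these orbits to an N-body integrator such as REBOUND to test stability, as they did.)

```python
import numpy as np

def hill_spaced_orbits(n_planets=6, k_spacing=11.0, m_planet=3e-5, m_star=1.0, a_inner=0.09):
    """Semi-major axes (AU) for a chain of equal-mass planets whose neighbors are
    separated by k_spacing mutual Hill radii. Masses are in solar masses; the
    planet mass and inner orbit here are placeholders, not the talk's values."""
    x = (2.0 * m_planet / (3.0 * m_star)) ** (1.0 / 3.0)
    a = [a_inner]
    for _ in range(n_planets - 1):
        # Mutual Hill radius: R_H = x * (a1 + a2) / 2.
        # Solve a2 - a1 = k_spacing * R_H for the next semi-major axis a2.
        a.append(a[-1] * (1.0 + k_spacing * x / 2.0) / (1.0 - k_spacing * x / 2.0))
    return np.array(a)

print(np.round(hill_spaced_orbits(), 3))   # e.g. [0.09, 0.122, 0.164, ...]
```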

They conclude that some of the Kepler multiplanet systems are at the edge of stability, and so they must have been "sculpted" over eons. There must once have been many more multiplanet systems that formed in unstable configurations and dynamically evolved into systems with fewer planets.


Implications for the False-Positive Rate in Kepler Planet Systems From Transit Duration Ratios (Robert C. Morehead, PSU)

This talk only applies to the multiple-planet systems detected by Kepler. As a reminder, Kepler has very low-resolution CCDs: each pixel is 4 arcseconds wide. So there is a lot of room in the Kepler photometry for false positives, blend scenarios, and binary star systems. When we look at these stars at higher resolution we can find out more about them, but we can't do that for everything.

The ratio of transit durations can probe whether two planets orbit the same star. This is especially useful for systems where we know there is more than one star, or where we suspect that a blending scenario is going on. The orbital eccentricities and the impact parameters affect the transit duration ratio. We mostly expect these systems to have coplanar planets, since they are all transiting. They use simulations to calculate the likelihood of the observed duration ratio under different scenarios: all planets around one star, and a suite of false-positive scenarios.
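(Author's note: here is a stripped-down version of the duration-ratio idea, assuming circular orbits and random impact parameters; the real analysis also folds in eccentricity and the full suite of blend scenarios.)

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# For circular orbits around the same star, transit duration T scales as
# P^(1/3) * sqrt(1 - b^2), so the normalized duration ratio
#   xi = (T1 / T2) / (P1 / P2)^(1/3)
# depends only on the two impact parameters and clusters near 1 for true same-star pairs.
b1 = rng.uniform(0.0, 0.9, n)
b2 = rng.uniform(0.0, 0.9, n)
xi = np.sqrt(1.0 - b1**2) / np.sqrt(1.0 - b2**2)

xi_observed = 1.8   # hypothetical measured value for some candidate pair
print(f"fraction of same-star draws with xi >= {xi_observed}: {np.mean(xi >= xi_observed):.4f}")
```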

They find that most multiplanet systems have a high probability of being associated with the same star. Now, the problem with this is that the parameters used here are the original Kepler stellar parameters, ignoring any followup observations made. So, while they conclude that most multis are likely around the same star, there is always the chance that there is a blended source and therefore a more complicated system than one might originally think.


This concludes our science talks for the day. After this we have another poster pop session and our poster session, followed by dinner.


ERES Session 5: Poster Pops

A Poster Pop is a sixty-second advertisement for a poster. You get one slide to supplement your presentation, and the goal is to attract people to your poster. Poster pops are a challenge, since you have to squeeze your message into a short time. It's a good time to practice your "elevator pitch": describe your work to someone who doesn't already know what you're doing, and do it effectively in a short time (as if you only have the length of an elevator ride). The trick here is that you're not talking about everything you're doing, but rather just about what your poster is presenting.

I myself am presenting both a Poster Pop and a poster. My poster pop is in the later session, so this poster pop presentation post is only related to the early session. Keep in mind, these are my own opinions about what makes an effective poster pop presentation. If you disagree, I encourage discussion!


Alright, now after the poster pops are done, here are some of my thoughts about what makes an effective poster pop:
1. The slide:
– Do: make your figure easy to read
– Do: summarize the takeaway message of you poster
– Do: make your slide somehow related to what you’re saying. If I looked at your slide long enough, would I be able to figure out what’s going on?
– Do: make sure to credit all of your co-authors on the paper
– Do: make sure that the text and background are highly contrasted for easy reading
– Don't: give away *all* of the milk for free. Leave them a reason to go to your poster.
– Don't: make your slide completely unreadable.

2. The content of your pitch:
– Do: say what is unique about your poster. Why should I go to yours over someone else’s?
– Do: tell us where to find your poster
– Do: be excited and speak clearly!
– Do: have good timing! Don’t go over time!
– Do: make what you’re saying related to what’s on your slide.
– Do: make sure that you have a clear beginning, middle, and end.
– Don't: say the same thing you'll say when you pitch your poster in person. This has a different purpose.
– Don't: just read what your poster says.
– Don’t: stare down at your notes the whole time.
– Don’t: write your whole script out. Be flexible!
– Don’t: make fun of someone else’s work


Phew! That is a lot to fit into a 60-second pitch. Granted, for our poster pops we have 2 minutes, but that's still pretty tight. Now that I've seen some of these poster pops, I will be well prepared (hopefully!) for my own poster pop later this afternoon.


ERES Session 4: Planetary Atmospheres 2

This is our second of two sessions on Planetary Atmospheres, chaired by Cornell University Research Associate Ramses Ramirez.


The Pale Orange Dot: The Climatic and Spectral Effects of Haze in Archean Earth's Atmosphere (Giada Arney, University of Washington)

Giada is from the University of Washington Astronomy and Astrobiology Program. While we want to know about the habitability of distant exoplanets, Earth will always be the best-studied habitable planet, so we want to study the habitability of Earth throughout its history. She studies the Earth during the Archean period, when life first developed (~3.8-3.5 billion years ago). Life at the time included a lot of methanogens (methane-producing microbes), which were prolific, and methane was much more abundant relative to oxygen in the atmosphere than it is today. We can look at Saturn's moon Titan for a modern-day example of a methane-rich atmosphere. We think that the Archean Earth had an orange atmosphere with a methane haze, like Titan does. Since we think that the Earth at one point was hazy, this is a good phenomenon to study to understand potentially habitable worlds.

What would the climate be like on a hazy Earth-like world? As you increase the amount of methane in the atmosphere, you are increasing the methane haze. At around 30% methane the haze starts to cool the planet by shielding the sunlight, but at some point the cooling bottoms out. Conclusion: hazy worlds can be habitable. In the right conditions, it can work like a reverse greenhouse effect, cooling the planet to make it habitable.

How can we detect hazy atmospheres? Methane haze absorbs a lot of light in blue wavelengths, so objects that are missing portions of blue light in their reflection spectra are likely to be hazy. For transit transmission spectra, when you add haze into the atmosphere you can’t see as deeply into the atmosphere, so your characteristic absorption features are more muted than normal. The spectrum at the ground of a hazy planet shows that the majority of the harmful UV light is blocked, so a hazy planet might have a greater chance at habitability, since the harmful types of light are reduced. As the haze on the early Earth was biologically produced and regulated, it might even be a signature of life.


The robustness of using near-UV observations to detect and study exoplanet magnetic fields (Jake Turner, UVA)

The magnetic fields of planets give us insight into the internal structures and rotation periods of exoplanets, atmospheric dynamics, the formation and evolution of exoplanets, potential exomoons, and habitability, and allow us to compare to solar system objects. Their method involves detecting asymmetries in the near-UV and infrared light curves. In the near-UV, you can detect the bow shock in front of the planet, like a boat going through water. This light curve should have an extended ingress and a shortened egress.

They used the Kuiper Telescope in the near-UV to observe 15 targets, looking for exoplanet magnetic fields. For WASP-77b, they note that the transit does not show the asymmetry predicted by their bow-shock model. In their 15 planets, they did not see any asymmetric transit shapes, which puts an upper limit on the potential magnetic fields of those planets. So, either those magnetic fields are really small, or perhaps this effect is not observable using that particular telescope or in that particular wavelength.

They use CLOUDY to simulate the ionization, chemical, and thermal states of the bow shock to see what the simulations say about their ability to detect the asymmetry. They find that there are no species absorbing in the near-UV that could cause an asymmetric transit, so their non-detections are due to their observing parameters, not a physical property of the planets. They find that near-UV transits are not robust for detecting magnetic fields, and near-UV planetary radii show variations that can be used to constrain their atmospheres.


Characterizing Transiting Exoplanet Atmospheres with Gemini/GMOS: First Results (Catherine Huitson, University of Colorado)

The main aim of Gemini/GMOS is to measure the dominant atmospheric absorbers in exoplanet atmospheres. They have broad-band, low-resolution optical coverage. Their 9-planet sample has low densities and good comparison stars; it is a comparative study, and they want to understand the systematic noise sources. The survey length is 3 years, which lets them improve signal-to-noise and increase repeatability. With GMOS, they can get similar precision to HST, but with fewer gaps in the data, which allows for better fitting of the transit curve and more accurate planetary and stellar parameters.

MOS = multi-object spectroscopy. The two spectra are the target and a reference star of the same spectral type, so that they can compare the two stars one wavelength at a time. They get a frame every 50 seconds to build a transit light curve, and as they go wavelength-by-wavelength they can see changes in the transit light curves with wavelength, and so build a transmission spectrum. Using this method they find that WASP-4b is a cloud-dominated hot Jupiter.
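
(A little illustration from me, not from Catherine’s talk: the basic differential spectrophotometry idea is to divide the target’s light curve by the reference star’s light curve in each wavelength bin, which removes systematics common to both stars, and then fit a transit depth to each binned light curve. The depths versus wavelength are the transmission spectrum. Everything below, function names included, is a made-up sketch of that idea, not their actual pipeline.)

```python
import numpy as np

def transmission_spectrum(target_flux, ref_flux, times, wavelength_bins, fit_depth):
    """Sketch of building a transmission spectrum from MOS data.

    target_flux, ref_flux : arrays of shape (n_times, n_wavelengths)
    wavelength_bins       : list of (start, stop) column indices
    fit_depth             : user-supplied function(times, light_curve) -> transit depth
    """
    depths = []
    for start, stop in wavelength_bins:
        # Collapse each spectrum over the wavelength bin
        f_target = target_flux[:, start:stop].sum(axis=1)
        f_ref = ref_flux[:, start:stop].sum(axis=1)

        # Dividing by the reference star removes common-mode systematics
        light_curve = f_target / f_ref
        light_curve /= np.median(light_curve)   # rough normalization

        # Fit a transit model to this binned light curve and keep the depth
        depths.append(fit_depth(times, light_curve))

    return np.array(depths)   # transit depth as a function of wavelength
```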

There are a number of observational challenges that they face during their analysis, and they are finding clever ways of solving each and every problem that arises. Through this method they find that XO-2b is a cloud-free hot Jupiter. Their first important result is that while WASP-4b and XO-2b are very similar planets in some respects, they have very different atmospheric structures, and they can detect that.


Hot and Heavy: Transiting Brown Dwarfs (Thomas Beatty, PSU)

Interesting presentation technique: start with your conclusions!

Conclusion 1: The brown dwarf desert may have an oasis.
Conclusion 2: transiting brown dwarfs provide links between hot Jupiters and field brown dwarfs, allowing us to use observations of one to understand the other (KELT-1b in particular)

Our understanding of the brown dwarf desert has evolved over the past 10 years or so. As of last year, we have found 7 BD companions in this region, all around F stars (~6250 K), which are more rapidly rotating than the Sun. KELT, unlike other transiting surveys, doesn’t ignore F stars, which have sort of been ignored before because RV detections are difficult. But now we see that F stars may have an oasis in the BD desert.

The atmospheres of planets and brown dwarfs behave differently. There’s a very distinct “kink” in the color-magnitude diagram of BDs at the L/T transition (where methane becomes dominant) that doesn’t exist for planets of the same temperature. BDs have a very tight color-temperature sequence, while HJs are much more scattered. The different behavior tells us how the atmospheres are behaving, particularly with regards to carbon monoxide and methane. People postulate that the methane in HJs shouldn’t be there, because those planets are highly irradiated by their stars, which field BDs are not.

Well…KELT-1b is a highly irradiated BD in a tight orbit around its primary star. The day side of KELT-1b looks just like a field BD, a late-M or early L dwarf. They want to look at the night side of the BD to see if there’s some chemical gradient between the day and night sides. If so…well, that would be very interesting indeed and tell us about the L/T transition for irradiated BDs, and how that impacts HJs and directly imaged planets.


We now move on to the first of our poster-pop sessions. I will do my best to capture some of the dos and don’ts of how to give a poster pop once I have some examples of them to work with!


ERES Session 3: Fellowship and Grant Writing Panel

The first of the panel discussions is about how to write effective fellowship and grant applications. Members of the panel have all applied for, and won, various fellowships. They will be talking about what makes an application effective, important things to think about, and other tips and tricks learned through experience.

The slides from this session will be posted to the ERES website soon.

Our panelists are:

James Owen, Hubble Fellow (JO)
Laura Kreidberg, NSF Graduate Research Fellow (LK)
Brian Hicks, NASA Postdoctoral Program Fellow (BH)
Daniel Foreman-Mackey, Sagan Fellow (DFM)

JO: The panelists are starting this session with a short presentation describing key points, specifics for each of their respective fellowships, good proposal writing, anonymous advice from selection panelists, and a Q&A. Participants can grill them further at lunch.

LK: Grad student fellowships are useful, super useful. There are no downsides. Guaranteed funding, and no need to TA if you don’t want to. They’re also good practice for writing more proposals in the future.

The NSF GRFP is open to senior undergraduates and first- and second-year graduate students. Apply all three years, even if you don’t have something your senior year! The application is pretty hefty, so start early and take your time. Two main criteria: intellectual merit (how good is the science?) and broader impacts (why is it useful to others?). Broader impacts can be presentations, conferences, public outreach, STEM mentoring, volunteering, tutoring, etc. Winning an NSF GRFP makes you eligible for the NSF GROW, which allows you to continue your research in a foreign country.

BH: A fellowship program versus a regular postdoc position. Pros: you set your own research program, you control your research budget, and it’s more ‘prestigious’. Cons: you’re on your own (potentially no supervisor). For ‘open’ fellowships, you can take the fellowship anywhere, while an ‘institutional’ fellowship is directly associated with a particular place that you then have to work at. There are open fellowships available around the world.

Statistics: ~300 new PhDs per year, ~100 fellowships available per year.

For the NPP, there are multiple application periods per year, and there are around 200 fellows in residence at any one time. There is a good stipend and benefits, and it lasts 2-3 years (the third year is funding dependent).

Advice: communicate directly with the adviser for the research opportunity before writing the proposal. Read the requirements carefully before you begin.

Go to the NASA Postdoc Website for a list of available positions.

DFM: Talking about the Sagan, Hubble, and Einstein fellowships, since they are pretty similar. The Sagan is specifically for exoplanets, the Hubble is for anything, and the Einstein is more for cosmology/extragalactic work. Duration is up to 3 years, with good benefits, a good research travel budget, and a good stipend.

You must propose 3 institutions on your application, and an institution can only accept one fellow of each type per year. The success rate is about 1:17.

JO: General proposal advice: keep things clean and concise, don’t list too many “in prep” papers, start early and take your time. Know your audience and tailor specifically, do not submit the same one multiple times. DO NOT BREAK OR BEND THE RULES.

A good proposal will explain why your idea is relevant, what your idea is and how you will do it, and why you specifically are the right person to do this project. Also, make sure that your idea is achievable on a reasonable timescale. Why is your proposed institution the right one for the project?

Proposals are not academic papers! They are advertisements for your project and for you. Make your proposal stand out, as reviewers read thousands of pages per season. Get feedback (early) from people both in and outside of your field.

Anonymous advice solicited from reviewers:
1. promise something new, not more of the same
2. don’t make the panel angry. Don’t say how awesome you are, don’t use too many acronyms, make the proposal easy to read, follow the rules.
3. have diverse letter writers. An observer, a theorist, and if possible someone outside of your university.
4. “At the very least, the proposal should not be irritating!”

And now the Q&A portion:

(note: I wasn’t able to actually see the panelists as they were answering questions, so I apologize to the panelists if I attributed one of their comments to someone else.)

Q: Why ask a paper reviewer to write your letter? What will they bring?
A: JO: The context and scientific relevance of your work

Q: Are there any fellowships that aren’t only for US citizens?
A: LK and DFM: Yes, Hubble and Sagan, and some others. Look carefully at the requirements.

Q: How do you decide whether or not you should have a direct supervisor for your project or be your own boss? So, postdoc or fellowship?
A: DFM and JO: It is mostly dependent on how confident you are in being your own boss and what your personal preference for work environment is. If you have a good independent project and don’t need firm structure to work, then a fellowship would work. You could also apply for a fellowship under an advisor’s project (“I want to take my fellowship and work on this project of yours. What do you think?”). If you like working more in a larger group, then perhaps a postdoc would be better for you.

Q: What else should you include in your proposal?
A: Audience member who is also a Sagan Fellow: make sure that you talk about successful presentations you have given, at AAS or the like. Show that you can communicate your work effectively. When you lay out your project, be specific as to how you will accomplish your goals. Most Hubble and Sagan fellowships don’t go to people right out of grad school; they mostly go to people who already have one or more postdocs under their belts. The extra postdoc shows your additional experience.

Q: Thoughts on resubmitting the same project with some modifications to make it better?
A: Audience member who is also an NPP fellow: you can do that for sure; take a close look at the comments from reviewers that you get back, and you can iterate over the reviews until it works. If the comments look good, it might just be that there was no funding for you that cycle or you’re on the waiting list. Keep trying!

Q: The NPP proposal is significantly longer than Hubble or Sagan. How does that change your writing style and focus?
A: BH: It’s about 15 pages, which is about the length of a research paper. You don’t need to change your focus, but you can elaborate more on points that you have to be concise on in your other proposals. You could also add sections, provided that they don’t confuse your proposal.

Audience comment: Europe is nice. There is also a higher success rate (~1:4) than most US fellowships and the salaries are competitive.

Audience comment (NPP winner, also NSF GRFP and NSF GROW winner): The NSF Postdoc Fellowship application is larger than the NPP application and is due soonest, so it can serve as a “first draft” or first attempt at an NPP. You might even get comments back on that proposal before you have to submit your NPP, and so get more feedback.

Q: For open fellowships, is it a bad idea to choose your PhD institution as your first choice?
A: JO: the anonymous feedback was split. If you choose it, have a really good reason why you picked it. Personal reasons (like two-body problems) are indeed good reasons. Panel members are people too, and some institutions will even break the “one fellow” rule for a personal reason. If the main reason for an institution is a personal one, go ahead and put that in your proposal directly. Lame reasons just look lame. Of course, you always need a really good reason for any of your institutions.

Q:  How does having a postdoc help or hurt one’s chances at a position in industry?
A: (The question was put off until tomorrow’s career panel, so this is my general impression): It probably doesn’t hurt to gain more experience that can transfer over. You can gain skills during this time that may be attractive to industry employers. You can make the switch at any time, so don’t be intimidated.

Q: Time management? How do you balance everything, when you have dozens of applications?
A: JO: Very carefully. Make a clear schedule for yourself, and realize that you need a good solid few months of time to get everything done, and you probably won’t be getting much research done at the same time. Start thinking about your projects early, and talk to professors about it before you start writing.

This session was a lot of fun. There were a lot of fellowship and grant winners in the audience who shared their many and varied experiences in applying for and winning grants. Great audience participation!

Now, it’s time for lunch, where we will be continuing the discussions on applying for and winning fellowships.


ERES Session 1: Stellar Characterization Talks

The first session of participant talks is chaired by PSU graduate student Taran Esplin and will focus on characterization of planet hosting stars.


Accessing the fundamental properties of young stars (Ian Czekala, Harvard Smithsonian CfA, @iczekala)

Talking about two techniques for measuring young stars and their protoplanetary disks. What are the stellar properties of near-solar-mass stars before they hit the main sequence? How do we find this out? Stars start out above and to the right of the MS, and stars of different masses take different paths and different amounts of time to travel from their initial positions to the MS. Lower mass stars take the longest time to hit the MS from their initial positions.

Technique 1: protoplanetary disk radio interferometry. A 3D structure model gives temperature, density, and velocity as a function of stellar mass. Then imaging across a CO line reveals the *kinematic fingerprint* of the star. This can dynamically weigh single stars in their “teenage” years.

Technique 2: using stellar spectroscopy to get the stellar mass. Get a spectrum of a star and you can usually get pretty good info on the effective stellar temperature and stellar radius. But, right now we only really use parts of the spectrum that we are very familiar with. What happens if we can fit an entire large chunk of the spectrum? With a more complex spectral model to use in fitting, we need to be more careful with the statistical methods we use to guarantee a good fit (more careful than a simple chi^2!). Essentially, do the residuals between your model and your data resemble white noise? If not, you may need to use a covariance matrix to model your noise residuals more carefully.
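
(A quick aside from me, not from Ian’s talk: the distinction he’s drawing is roughly the difference between a plain chi-squared fit, which assumes independent white-noise residuals, and a full Gaussian likelihood with a covariance matrix that can soak up correlated residuals. Schematically, for residuals r = data − model and covariance matrix C,

```latex
\ln \mathcal{L} = -\frac{1}{2}\left( r^{\mathsf{T}} C^{-1} r + \ln\det C + N\ln 2\pi \right)
```

which reduces to the familiar chi-squared, up to a constant, when C is diagonal with the per-pixel variances. The off-diagonal terms are what let you model correlated, non-white residuals.)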

In the future, we can combine dynamical masses from ALMA/SMA. The current sample is about 20 stars, and they hope to calibrate the early HR diagram.


Defining the Range of Chemistry for Exoplanet Interiors (John M. Brewer, Yale)

While we know of a lot of exoplanets, there are very few of them where we know their masses accurately. The problem is actually getting an accurate stellar surface gravity, which affects estimates of radius and temperature, since the parameters are degenerate.

They use Spectroscopy Made Easy (SME) to get a better handle on the surface log(g) of the star. They use over 7000 lines to get this better estimate. By comparing to stars that have asteroseismic surface gravities, they can test whether their SME procedure works better than their previous procedure. SME does a substantially better job, excepting a few stars that are rapid rotators, which mess up the comparison.

There don’t seem to be any trends in the derived log(g) with other parameters, meaning that the SME method works well across a wide range of stars (excepting rapidly rotating stars). However, they now want to use this to get stellar compositions, and they test the accuracy of the methods using asteroid spectra (this assumes that we know the composition of the Sun).

For asteroids, they test their method using the ratio of carbon to oxygen (C/O), since that ratio is highly dependent on initial Solar composition. In their stellar sample, they find very few high C/O ratio stars (very few diamond planets, boo).

Exoplanet properties require accurate knowledge of the host stars, and the stellar log(g) is a key parameter for characterizing the star. Look out for a catalog with all of these parameters, to be published soon.


Broadening our Horizons on Short-Period Stellar and Substellar Companions with APOGEE (Nicholas Troup, UVA)

Serendipitous science from APOGEE. About half of stars are in binary (or higher-order) systems, and on average every star has 1 planet. Companions can be stellar binaries, planets, or brown dwarfs. Brown dwarfs are the “missing link” between stars and planets. A strange phenomenon is the “brown dwarf desert”: a lack of BD companions within 5 AU of solar-type stars. This is strange, because we have lots of hot Jupiter planets very close to their stars and at near-BD masses.

APOGEE was meant as a galactic structure survey, but has been very useful for exoplanet discovery and characterization. The APOGEE RV Companion Survey takes stellar spectra and can pull out stellar parameters and abundances in the APOGEE samples. Their spectral model is exceptionally good at fitting the APOGEE spectra to get stellar properties. They can then measure radial velocities and search for planets. They have to go through a rigorous false positive analysis to weed out planet-like signals that aren’t actually planets from the real planets.

About half of their stellar sample is giant stars, which really haven’t been searched for companions all that much. They have a galactic distribution of stars, looking both inwards and outwards from the galactic core. While their sample is mainly in the thin disk, there are a few thick disk and halo stars, as well as globular cluster and open cluster stars. After a first analysis, about half of their sample are stellar or BD companions, and the other half are potentially planetary companions. Using their methods they hope to “map the shores” of the BD desert, and build a galactic map of companion frequency.


Re-characterization of a gravity-darkened and precessing planetary system PTFO 8-8695 (Shoya Kamiaka, University of Tokyo)

This system is a T-Tauri star + hot Jupiter. The transits that were observed in 2009 and then in 2010 don’t have the same shapes, and they are trying to figure out why that is.

The star is rotationally deformed – that is, it’s rotating so fast that it more closely resembles a football than a soccer ball – and the central band of the star is gravity darkened, while the poles are brightened. There is also a precession of the orbital axis relative to the stellar spin axis, called “nodal precession”. Together, these two phenomena can explain the time-variable transit lightcurves. Since this system fits this scheme so well, it’s an ideal benchmark case for this model. Previous work in this area does not favor such a synchronized state (the two components of the model varying in a synchronized way) with the serious misalignment found in PTFO 8-8695.

Their method of characterizing the synchronicity of the nodal precession and the gravity darkening is expected to unveil the properties of younger or hotter stars, which are known to be more rapidly rotating than older or cooler ones.


Next session is the first of our Planetary Atmospheres talks. Stay tuned!


ERES Day 2 Session 3: Impostor Syndrome

A very critical issue for many (most) early career scientists is “impostor syndrome” (IS), which is defined as “an internal experience of intellectual phoniness despite external indications of success” (Clance P, Imes S. The impostor phenomenon in high achieving women: dynamics and therapeutic intervention. Psychotherapy: Theory, Research and Practice. 1978;15:241-247). PSU Astronomy professor Jason T. Wright is speaking about impostor syndrome, how it applies to you, and the effects and symptoms of IS.


We asked Jason to give this talk, and at first he seemed a bit surprised and perhaps felt unqualified to give it. But perhaps that is the hallmark of IS. Jason agreed, did a bunch of research, and now feels ready to give this talk to us all.

Who is this IS talk for? Those who suffer from IS are often unaware that they do, and so those people need to be educated so that they can overcome it. If you have students, you should also know about this, since your students and advisees may have IS.

IS was first quantified in 1978 as the “impostor phenomenon” and was only applied to successful women: those who, despite numerous accomplishments in their field, persisted in thinking that they weren’t skilled enough for the job that they have and were only fooling their peers into thinking that they belonged.

Jason anonymously surveyed ERES participants about their own thoughts and experiences with IS before the conference. His results show that most participants believed that around 70% of their peers have been affected, at least mildly, by IS (they included themselves in that percentage). This is a persistent and widespread phenomenon, and we need to educate ourselves about it more.

IS is…
– a mismatch between external evidence of accomplishments and self-image
– feeling fraudulent or phony, having achieved success not through genuine ability
– a distorted, unrealistic, unsustainable definition of competence
– a fear of being “discovered” not to be worthy of position or honors
– feeling of having deceived others to achieve position

All people, regardless of their accomplishments in life (like Jodie Foster and Meryl Streep) can be susceptible to IS. But, this can apply even to the “Meryls” of science. Or the supporting actors of science. Jason gives a poignant example in the form of John Asher Johnson (with Dr. Johnson’s permission of course), quoting Dr. Johnson’s own talks and feelings of IS.

What are some of the misconceptions that contribute to IS?
– success is primarily due to extreme amounts of narrow technical competence (“The Cult of Smart”)
– competence is a fixed trait that some people have and others do not
– the most successful, competent people are perfectionists who never make a mistake and who never take on a problem without the necessary preparation

Well…none of these are actually true statements! Academic and research success is not based on any one quality, but rather exists in multiple dimensions. These include: the ability to identify important and answerable questions, adeptness at basic and complex problem solving, the ability to persevere on a problem, the possession of knowledge and skills, curiosity, luck (whether random or manufactured), and communication. This list comes from Ed Turner and Scott Tremaine, and is expounded upon by John Johnson in his own blog post.

The point of this is that success in academia is a constantly evolving process, and it is acquired through this set of skills, which improve and evolve with practice. No one gets everything right on the first try (or even the hundredth!). Most academics work on problems that are outside their area of expertise, and take that risk of making mistakes in order to work on something interesting or valuable.

How can you begin to overcome IS? It’s something that you can do something about, both for yourself and for others. Talk about it! Normalize it! It happens to everyone, so there is no shame in feeling it. Try to emulate the personality traits of people you look up to (“fake it until you make it” or “try to live the dream”). Find supportive people to talk to and to discuss the problem with. Make note of the nice and complimentary things that people say about you: make a file, save them, refer to them, and BELIEVE IT!

IS contains within it some inherent double-standards. You think that your own successes are due to luck or deceptions, but everyone else’s successes are due to skills. You respect your peers’ and superiors’ judgments and knowledge — except when it’s about you. Also…you think so highly of yourself that you can deceive everyone you meet, but you don’t have enough skill to do what you’re trained to do. Acknowledge these logical flaws and use them to combat IS.

IS is not a recognized mental disorder, but happens to everyone. This phenomenon occurs across all demographics, no one is immune. If someone comes to you to talk about this, don’t brush it off, don’t shame it, be supportive and have an open discussion. Learning to combat IS, both in yourself and in others, is something that can only benefit this community at large.


ERES Session 7: Poster Session

As I am also presenting a poster, this isn’t a true live-blog, but rather my thoughts from the poster session. You can see the list of poster titles on the ERES schedule. Based on my own experiences, there are a few components to a poster presentation that help with effective communication:

1. An effective poster: I can go on and on about this, and in fact I have done so. My own poster at the conference was entitled “Best Practices for Effective Poster Design,” a copy of which can be found on my personal blog. If there are three things that I can stress about effective poster design they would be:

– Keep your words clear and concise. Abstracts and large blocks of text don’t belong on a poster. You can have more details on a website if you want, or a link to your paper that has all of the big descriptions, but your poster is *the visual aid for your oral pitch*. It should not be your paper in visual format.

– Use graphics that are easy to understand and that aid in telling your story. Make sure that your audience knows what they should be learning from each graphic even if you’re not there to explain it to them. You can do this with annotations, captions, and other visual clues. This means that the graphics that you use on your poster might not be exactly the same as the graphics that you use in your paper or in your presentation. Use as many graphics as you need to tell your story (a picture is worth a thousand words!), but no more than is necessary.

– Make sure that whatever organization scheme you choose and the style and colors that you choose don’t distract from the content of the poster. All your stylistic choices should only aid content comprehension, and not detract from it. Things like using clear organizational structure, simple backgrounds, and only a few colors will keep your poster from looking cluttered.

2. A clear and concise oral pitch: I say that your poster is the visual aid for your oral pitch, but that means that you need to have an effective oral pitch as well. An oral pitch for your poster is usually around 5 minutes long, and should take your audience through your poster with a more in-depth explanation than is on the poster itself. While you are doing this, you should be referencing graphics, charts, or numbers that are on your poster.

3. Good note-taking ability: This is perhaps the most overlooked skill in a poster presentation. By note-taking, I mean you, the presenter, taking notes on the interactions that you have at your poster. Who did you talk to? Who showed interest? What did you talk about? Did they have ideas or followup questions? Did they leave an email address for you? All of these help you, the presenter, learn during your own presentation. Following up with the people you interacted with will also help develop your networking skills.


That’s it for today folks! Thanks for tuning in and I’ll see you bright and early tomorrow morning!


ERES Session 2: Planetary Atmospheres 1

Our first session of planetary atmosphere talks is chaired by PSU graduate student Natasha Batalha.


Hubble Space Telescope Spectroscopy of WISE Detected Brown Dwarfs (Adam C. Schneider, University of Toledo)

The Wide-Field Infrared Survey Explorer (WISE) is an all-sky survey which maps the sky in four infrared wavelengths, ranging from the near-infrared to the mid/far-infrared. As such, WISE has been very, very good at detecting very cool sources that appear bright in the infrared, like brown dwarfs. He’s talking today mostly about Y dwarfs, which are the coolest known brown dwarfs (but still aren’t planets). Y dwarfs range from around 400 K to 250 K, which is near the temperature of Earth! Y dwarfs pop out in WISE images as bright green points, and we’ve found around 20 of them so far.

“Why WISE Ys?” These anchor our low-temperature spectroscopic models, in a regime where “regular” stars don’t emit a lot of light. Y dwarfs, because they are near the temperatures of planets, can help us understand exoplanets, too. They are the “missing link” or the “crossover” objects. They are so faint that we can’t really detect them using ground-based telescopes, so we go to space with the Hubble Space Telescope (HST) and WISE.

They use HST to take spectra of their Y dwarfs in the Y-band (why the Y-band of WISE Ys?) because, as you get to lower and lower temperatures, a very distinctive ammonia absorption feature appears in the Y-band in spectral models. Their question is: as you go to lower temperatures, why doesn’t that absorption feature appear as strongly as predicted? Where does that ammonia go? Their answer is vertical mixing: the ammonia that would normally sit in the upper layers of the Y dwarfs and cause that ammonia line is getting mixed into the lower layers, and so there is less ammonia than expected. However, simultaneously fitting the near- and mid-infrared data is still an issue, as they want to fit all distinctive features at the same time. That work is ongoing.


Hot Jupiter Atmospheres Revealed with HST/WFC3 (Laura Kreidberg, University of Chicago)

Transit modeling code: batman = Bad-A** Transit Model cAlculatioN, currently in development and online at github. If you help with testing and debugging, Laura Kreidberg will buy you a beverage.
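
(Not from Laura’s talk, just my own illustration: here’s roughly what a minimal batman call looks like, based on the package’s documented interface; the planet parameters below are made up, and the details may shift while the code is still in development.)

```python
import numpy as np
import batman

# Made-up transit parameters for a generic hot Jupiter
params = batman.TransitParams()
params.t0 = 0.0                # time of mid-transit [days]
params.per = 1.0               # orbital period [days]
params.rp = 0.1                # planet radius [stellar radii]
params.a = 15.0                # semi-major axis [stellar radii]
params.inc = 87.0              # orbital inclination [degrees]
params.ecc = 0.0               # eccentricity
params.w = 90.0                # longitude of periastron [degrees]
params.limb_dark = "quadratic"
params.u = [0.1, 0.3]          # limb-darkening coefficients

t = np.linspace(-0.05, 0.05, 1000)    # times at which to compute the model [days]
model = batman.TransitModel(params, t)
flux = model.light_curve(params)      # relative flux, 1.0 out of transit
```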

They are observing the HJs WASP-43b and WASP-12b using an HST program called “Follow the Water.” As the name suggests, they are looking at water bands of these HJs to get precise water abundance estimates. They find about 0.5-0.75 times the solar water abundance in WASP-43b. This is important to know because water is a key molecule in planet formation. WASP-43b also very nicely follows the mass-metallicity relation for planets, that more massive planets have fewer heavy elements than less massive planets.

WASP-12b, also a HJ, is the “canonical” carbon-rich planet, where the C/O ratio is greater than 1 (recall from the last session, most stars have C/O around 0.5). Previous estimates of the C/O ratio are based on emission spectra; they took a transmission spectrum (through the atmosphere) and detected a very strong water feature. This is strange because with a high C/O ratio, most of the oxygen should be bound up in carbon monoxide or carbon dioxide, not water. From their transmission spectra they find that an oxygen-rich model is more accurate than a carbon-rich model: a C/O < 1 is a million times more likely than C/O > 1 (i.e. an oxygen-rich model is much, much more likely than a carbon-rich model).

In the future, they (and we) need to study the whole planet to characterize the atmosphere: not just the temperature/pressure structure or the composition, but the whole thing at once. We need to reconcile the results from the emission spectra and the transmission spectra. They want to break the degeneracy between models of temperature/pressure and composition. Hopefully breakthroughs will be forthcoming!


Emission and Phase Curves from 3D Exoplanet Atmospheres (Y. Katherina Feng, UC Santa Cruz)

Katherina (a recent PSU graduate) is talking about the emission from planetary atmospheres using 3D model atmospheres. This will help us figure out what kinds of spectra we will be seeing when the James Webb Space Telescope finally launches, so that we can characterize our future observations. JWST will be much more precise and accurate in the infrared than current telescopes, so we need to understand what these planets should look like in JWST spectra. Doing 3D models will help us figure out what limits our accuracy in detection and modeling, and what biases are inherent in our 1D models.

They are testing a new 3D radiative transfer code “SPARC” to test the opacity grids against the 1D models, and have found that there are some discrepancies. If they use the same opacity grids as the 1D models do, the 3D code and the 1D codes match more closely. They apply this to WASP-43b and find that the assumed inclination of the planet has little effect on the spectral solution (they then assume an edge-on system).

When they look at the atmospheres in a variety of wavelengths and phases, they see that atmospheres are really complex 3D structures, and a 1D analysis of the atmosphere may not cut it. First, is there a difference between the day and night sides of the planets? The 3D model does show differences between the day- and night-side profiles. Is the 1D model biased towards the day side? Yes, it is. Our measurements should essentially be an average of the day and night sides, but 1D models are more biased towards day-side values. They plan to test the limits of exoplanet spectroscopy using their new 3D models and ferret out the biases in our 1D retrievals.


 

We’re going to be changing gears and talking about how to write a successful fellowship or grant proposal with our Fellowship Panel, made up of four successful fellowship winners.


Emerging Researchers in Exoplanet Science Live-Blogging

Hello all!

Tomorrow begins a new adventure for me…live-blogging a scientific conference. The Emerging Researchers in Exoplanet Science Symposium (ERES) is a peer-led science symposium for early career scientists interested in exoplanets and exoplanet-related fields.

As part of the science outreach effort for the symposium, I will be live-blogging the events of the symposium on the ERES website, and posting duplicates here. The blog posts will have summaries of talks and career topic panels.

So, stay tuned, and I will see you tomorrow!


Best Practices for Effective Poster Design

You’re probably (hopefully!) at this blog post because you followed the QR code or link found on the meta-poster entitled “Best Practices for Effective Poster Design.” Well done on being hip to the new technological trends and welcome to the website and blog of Kimberly Cartier (me!). If you’re here because you are a regular reader…kudos to you!

This post includes a PDF of the meta-poster, some more good poster practices and suggestions, sources used in the meta-poster, and additional places to find good material for how to make posters. The original QR code you used should always take you to the most recent version of this post. Feel free to download a copy of the poster for yourself and distribute it to your colleagues and students.

Thanks, and stay tuned!

Note: this post will soon be updated with more comments and suggestions gathered at the AAS227 meeting. Stay tuned!

Note: after the poster session at the 2015 ERES, I have added a section to the bottom of this page containing some of the feedback I got at the conference. There were some really good suggestions and comments! So, taking those into account, I wrote down some things that I would modify about the poster, or comments I got about specific things. If I talked to you about this at ERES and your comment is not on here, leave me a message and I will include it! Thanks to everyone who commented at ERES, I had a great time discussing this with all of you.    -Kim

Note: I also hope to soon have a video of my poster pop to put on here. I had a lot of fun doing the poster pop, and I recommend that people learn how to do them. You can read more about poster pops on my post about them.


Download a PDF of the poster here: Best Practices for Effective Poster Design


Additional good poster practices not found on the meta-poster:

  1. Put a picture of the lead author on the poster. This will help people find you at the conference to talk about  your poster if you’re not standing at your poster when they visit. Make sure that the picture is professional (so, probably not your social media profile picture) and that you’re the only one in it.
    • Note: this was the most controversial part of my poster, based on my experiences at ERES. This was the question I was asked the most. If you are uncomfortable having your face on your poster, then don’t do it. If you are uncomfortable about your poster, it will be noticeable in your oral pitch. However, if this is something you are comfortable doing, having a picture on there can only help with the networking process.
  2. Make sure that there is a contact email address on the poster somewhere. That way, people can contact you after the presentation with questions, comments, or suggestions.
  3. A nitpicky detail that will make your poster look really clean is to make sure that everything within one section is aligned along the tops and along the sides. For example, in the top section of the meta-poster, there are two clearly defined “columns” in the section. The left column has the top text box and the table. The text box and table are aligned on the left to form a straight line. The top text box in the right “column” is aligned along the same horizontal line as the text in the left “column.” Small things like this make your poster look very clean.
  4. Everyone knows to cite text or results that are found in publications. Many people forget to also put citations on figures that are found in publications. Whether or not you are the author of that paper, if the figure is published in a refereed journal it is technically copyrighted, and needs to be cited.
  5. Regarding citations: having citations of the format [Author, et al. (year)] all over your poster is distracting and takes up a lot of space. Use superscripted numbered citations like “cited text[1]” with a numbered reference list at the end to save space.
  6. Some people find a reference list on the poster itself to be a waste of space and not completely necessary. I say that it depends greatly on the type of poster you are presenting and where you are presenting it. If it’s a research poster that presents a lot of content from published sources, it’s good to have a list of where it all comes from, especially if you’re presenting at a scientific conference where you might run into someone who wrote the content you are citing. In that case, I recommend the citation format described in #5 to conserve space. If there is mostly original content on the poster, you can more easily justify having your sources elsewhere, like on a website, or even on a separate piece of paper that you tack up next to your poster. If you do this, be sure that the location of your references is easily found (like having a big honking QR code on your poster). Whatever you choose, always, always cite all of your sources. A plagiarized poster is most definitely not a good poster!
  7. The Layar App is one of the newest ways to augment your poster with additional content. It is, as the name suggests, a way to virtually layer your poster with additional information that can be read by the Layar app on your smartphone or tablet (they call it “Augmented Reality”). This is a great way to show things like the simulation movies that your simulation snapshots come from, alternate plots, links and references, or even just additional content that is in that section. I have not yet used it myself, but have seen it used at AAS a few times and it is really cool.
  8. Before you take your poster to a printer (or even before you start designing your poster) be sure to double check the poster guidelines for your conference. Then, make sure you set the page size for your poster designing program to the right size — and it may be different for each poster!
  9. Tip about printing: printing your poster can be expensive, so shop around. At PSU, the cheapest printer to be found is the Engineering Copy Center. If you don’t have access to that, keep in mind your options for printing are flexible. There is always the classic flat-print poster on regular poster paper, but those can be flimsy and may not hold up well to travel or to multiple uses. Glossy photo paper looks really nice, but is much more expensive. Fabric printing is gaining popularity: the quality is nice, the price is reasonable, and the fabric travels really well (you can fold it in your suitcase instead of using a poster tube!). A good compromise if you don’t want to do fabric is to print on regular poster paper and then have it laminated for glossiness and durability. Laminating a poster is often cheaper than printing on glossy photo paper.

The ideas and content contained in the “Good Poster Poster” were compiled from many sources. A lot of the ideas were contributed by the first and second sources in this list. The other sources listed here are also good places to look for examples of good and bad poster designs.

  1. AstroWright’s “Make Award Winning Posters”: Much of the text was contributed by Ming Zhao (with contributions from Jason Wright), and contains examples of award winning posters by Ming and by Sharon Wang as well.
  2. Kathryn Tosney’s “How to create a poster that graphically communicates your message”: This page by the Chair of Biology at the University of Miami is a good source for how to communicate to different types of audiences and how to lay out your poster effectively. Bonus: there are both good and bad examples for each of the themes she talks about.
  3. AstroBetter’s page on Presentation Skills: A compilation of a number of other sources for good presentation skills, covering both oral presentations and poster design and presentation.
  4. Credit for the headshot on the poster goes to 2015 Meadow Lane Photography.
  5. Bonus: exoplanets.org can create beautiful and functional plots using the most up-to-date exoplanet catalogs. If you want to make plots that use current exoplanet information (like the ones on the poster!) but don’t want to have to download and compile all of the data yourself, this is the place to go.

 


Based on my conversations at the ERES poster session, here are some thoughts on what I, and others, think would make this poster even better:

  1. Use an even higher resolution graphic for the large PSU logo at the top. The current one still shows up a little blurry when printed full-scale, which is not a good poster practice!
  2. If I were to make a research poster with this design (and indeed, I have), I would use far fewer words than are on this version of the poster. The meta-poster, as it is, is meant to be an education and outreach poster, containing educational concepts. Some of those concepts are very difficult to put into an effective graphic, and so were left as words. A confusing or ineffectual graphic is just a waste of space, I say, so I left them as words. For a science poster, you should aim for many fewer words and use more graphics instead.
  3. The organization within the third blue box can be a bit confusing, especially on the right-hand side. There are two separate ideas (using high-quality graphics and choosing appropriate colors and symbols) that don’t really relate to each other well. I would add a light horizontal division line, or more whitespace, between the two to better delineate them.
  4. In the table in the top box, the dots in the bullet points are pretty close to the vertical table lines. If I had more space, I would separate the two more.
  5. If this were a science poster instead of an education poster, I would not use a graphic in the style of a pie chart to demonstrate that point. In a science poster, I would use a histogram instead. However, many non-academic people are more familiar with a pie chart, so I chose that format for this education-based poster.

If I talked to you at ERES about the poster and you have more comments, please feel free to leave me a message! I will happily add your comments to this post for others to comment on.

 

 


Skip to toolbar