In Davies & Wagner (2013), the authors describe and motivate an ongoing (circa 2013) search of Lunar Reconnaissance Orbiter imagery for unusual features, conducted at the LRO laboratory at Arizona State University.
The first point of note that I found in the paper was the initial statement of assumptions. The authors state that anything we could find (outside of communication SETI) will most likely be something “post-biological,” given the long timescales over which we would expect it to have lasted. They then state directly that we have no reasonable way to extrapolate from our own technology to guess what kinds of artifacts we should be looking for, an openness I found refreshing.
What follows is an intriguing and frank statement of the motivation for searching the LRO database (or any database). Given that we don’t know what we are looking for, they propose that the best way to make meaningful progress with limited resources is simply to search all existing databases for “artificiality,” and that the choice of which databases to search should be tied only to cost rather than to the likelihood of results. This was an interesting thought. Cost is obviously worth keeping in mind when deciding how to allocate resources, but in every other field, missions and grants are proposed and evaluated with heavy weight placed on their merit. But what if it is nearly impossible to quantify the merit of an experiment? We can estimate the number of planets TESS will find (>20,000) or the number of stars Gaia will measure parallaxes for (>1 billion), but there is no way to know how many ETI signatures will be found by any given search (although one could cheekily say zero, based on the results of all other searches).

While this seems to make sense at first glance, I think it rests on a false equivalence: just because we don’t know the utility of two different searches does not mean their utility is equal, as this line of thinking implies. If cost were the only thing that mattered, I could simply submit two half-cost proposals that each cover half of a database. That’s two half-cost searches for the price of one! In the same vein, some databases are clearly more valuable to search than others. Imagine two otherwise equivalent cameras taking pictures of the Martian surface, one of which can take pictures at much higher resolution. Clearly, while it would be more expensive, looking at the higher-resolution data would be much more useful.
While it is difficult to quantify the utility of SETI searches, they can be viewed as setting limits on the parameter space that SETI artifacts can inhabit (a la Jill Tarter’s Needle in a Cosmic Haystack; see Appendix A of Wright & Oman-Reagan (2017) for a motivation of this type of quantification).
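As a toy sketch of what this kind of quantification could look like (my own illustration, not taken from either paper, and all numbers below are invented): if a search's coverage along each independent axis of the haystack (sky area, sensitivity, observing time, etc.) can be expressed as a fraction, a crude completeness figure is just the product of those fractions.

```python
# Toy illustration: a search's "completeness" as the fraction of an
# N-dimensional haystack parameter space it covers, in the spirit of
# Tarter's needle-in-a-cosmic-haystack framing. The axes and numbers
# here are hypothetical, chosen only to show the arithmetic.

def haystack_fraction(coverages):
    """Multiply the fractional coverage along each independent axis
    (e.g. sky area, sensitivity range, observing time)."""
    frac = 1.0
    for axis_fraction in coverages:
        frac *= axis_fraction
    return frac

# A hypothetical survey covering 10% of the sky, 1% of the relevant
# sensitivity range, and 5% of the observing-time axis:
survey = haystack_fraction([0.10, 0.01, 0.05])
print(f"fraction of haystack searched: {survey:.0e}")
```

Even generous-sounding per-axis coverages multiply down to a tiny overall fraction, which is one way to make the "null results so far" point quantitative without claiming any search was worthless.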
Beyond these points, the authors discuss how automation is ill-suited to artifact SETI, since we would have to program in exactly which signatures to look for. Currently, they have students and faculty searching the Narrow Angle Camera images for interesting features by eye, and they suggest that the best strategy may be to utilize the time of enthusiastic volunteers to perform the image analysis.