Category Archives: Pattern Recognition

Deep-learned Models and Photography Idea Retrieval

Intelligent Portrait Composition Assistance (IPCA)
Farshid Farhat, Mohammad Kamani, Sahil Mishra, James Wang
ACM Multimedia 2017, Mountain View, CA, USA
(Acceptance rate = 64/495 = 12.93%)

ABSTRACT: Retrieving photography ideas that correspond to a given location facilitates the use of smart cameras, as amateurs and enthusiasts are highly interested in taking striking photos anytime and anywhere. Existing research captures certain aesthetic techniques, such as the rule of thirds, triangles, and perspective, and retrieves feedback based on a single technique. However, such systems are restricted to that particular technique, and the retrieved results leave room for improvement because they are limited by the quality of the query. What is missing is a holistic framework that captures the important aspects of a given scene and gives a novice photographer informative feedback for taking a better shot. This work proposes an intelligent portrait composition framework built on our deep-learned models and image retrieval methods. A highly rated, web-crawled portrait dataset is exploited for retrieval. Our framework detects and extracts the ingredients of a given scene, representing them as a correlated hierarchical model. It then matches the extracted semantics against the dataset of aesthetically composed photos to produce a ranked list of photography ideas, and gradually optimizes the human pose and other artistic aspects of the scene to be captured. A user study demonstrates that our approach is more helpful than the other feedback retrieval systems constructed for comparison.
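As a rough illustration of the retrieval step only (not the authors' implementation), the sketch below ranks exemplar portraits by cosine similarity between a query scene's feature vector and precomputed features of a portrait gallery. The feature vectors, gallery size, and file names are placeholders; in the actual framework the features would come from the deep-learned scene models described above.

```python
import numpy as np

def cosine_similarity(query, gallery):
    """Cosine similarity between one query vector and a gallery matrix (rows = exemplars)."""
    q = query / (np.linalg.norm(query) + 1e-12)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-12)
    return g @ q

def retrieve_ideas(query_feat, gallery_feats, exemplar_ids, top_k=5):
    """Return the top-k exemplar portraits most similar to the query scene."""
    scores = cosine_similarity(query_feat, gallery_feats)
    order = np.argsort(-scores)[:top_k]
    return [(exemplar_ids[i], float(scores[i])) for i in order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder features; in practice these would be produced by the
    # deep-learned models applied to the query and the portrait dataset.
    gallery = rng.normal(size=(1000, 512))
    ids = [f"portrait_{i:04d}.jpg" for i in range(1000)]
    query = rng.normal(size=512)
    print(retrieve_ideas(query, gallery, ids))
```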

Please cite our paper if you are using our professional portrait dataset.

@inproceedings{Farhat:2017:IPC:3126686.3126710,
author = {Farhat, Farshid and Kamani, Mohammad Mahdi and Mishra, Sahil and Wang, James Z.},
title = {Intelligent Portrait Composition Assistance: Integrating Deep-learned Models and Photography Idea Retrieval},
booktitle = {Proceedings of the on Thematic Workshops of ACM Multimedia 2017},
series = {Thematic Workshops ’17},
year = {2017},
isbn = {978-1-4503-5416-5},
location = {Mountain View, California, USA},
pages = {17–25},
numpages = {9},
url = {http://doi.acm.org/10.1145/3126686.3126710},
doi = {10.1145/3126686.3126710},
acmid = {3126710},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {deep learning, image aesthetics, image retrieval, photographic composition, portrait photography, smart camera},
}

 

Leveraging big visual data to predict severe weather

NEWS SOURCES:
www.sciencedaily.com/releases/2017/06/170621145133.htm
www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php
phys.org/news/2017-06-leverages-big-severe-weather.html
sciencenewsline.com/news/2017062217300053.html

Every year, severe weather endangers millions of people and causes billions of dollars in damage worldwide. But new research from Penn State’s College of Information Sciences and Technology (IST) and AccuWeather has found a way to better predict some of these threats by harnessing the power of big data.

The research team, led by doctoral student Mohammad Mahdi Kamani and including IST professor James Wang, computer science doctoral student Farshid Farhat, and AccuWeather forensic meteorologist Stephen Wistar, has developed a new approach for identifying bow echoes in radar images, a phenomenon associated with fierce and violent winds.

“It was inevitable for meteorology to combine big data, computer vision, and data mining algorithms to seek faster, more robust and accurate results,” Kamani said. Their research paper, “Skeleton Matching with Applications in Severe Weather Detection,” was published in the journal Applied Soft Computing and was funded by the National Science Foundation (NSF).

“I think computer-based methods can provide a third eye to the meteorologists, helping them look at things they don’t have the time or energy for,” Wang said. In the case of bow echoes, this automatic detection would be vital to earlier recognition of severe weather, saving lives and resources.

Wistar, the meteorological authority on the project, explained, “In a line of thunderstorms, a bow echo is one part that moves faster than the other.” As the name suggests, once the weather conditions have fully formed, it resembles the shape of a bow. “It can get really exaggerated,” he said. “It’s important because that’s where you are likely to get serious damage, where trees will come down and roofs get blown off.”

But when the conditions are just beginning to form, the pattern can be easy for forecasters to overlook. “Once it gets to the blatantly obvious point, (a bow echo) jumps out to a meteorologist,” he said. “But on an active weather day? They may not notice it’s just starting to bow.”

To combat this, the research focused on automating the detection of bow echoes. By drawing on the vast historical data collected by the National Oceanic and Atmospheric Administration (NOAA), bow echoes can be automatically identified the instant they begin to form. Wang said, “That’s our project’s fundamental goal—to provide assistance to the meteorologist so they can make decisions quicker and with better accuracy.”

By continually monitoring radar imagery from NOAA, the algorithm is able to scan the entire United States and provide alerts whenever and wherever a bow echo is beginning. During times of active severe weather, when resources are likely to be spread thin, it’s able to provide instant notifications of the development.

“But this is just the first step,” Kamani commented. With the detection algorithm in place, they hope to one day forecast bow echoes before they even form. “The end goal is to have more time to alert people to evacuate or be ready for the straight line winds.” With faster, more precise forecasts, the potential impacts can be significant.

“If you can get even a 10, 15 minute jump and get a warning out earlier pinned down to a certain location instead of entire counties, that’s a huge benefit,” Wistar said. “That could be a real jump for meteorologists if it’s possible. It’s really exciting to see this progress.”

Envisioning the future of meteorology, the researchers see endless potential for the application of big data. “There’s so much we can do,” Wang said. “If we can predict severe thunderstorms better, we can save lives every year.”

Discovering Triangles in Portraits and Landscapes

ABSTRACT

Incorporating the concept of triangles in photos is an effective composition method used by professional photographers to make pictures more interesting or dynamic. Information on the locations of the embedded triangles is valuable for comparing the composition of portrait photos, and can be further leveraged by a retrieval system or used directly by photographers. This paper presents a system that automatically detects embedded triangles in portrait photos. The problem is challenging because the triangles used in portraits are often not clearly defined by straight lines. The system first extracts a set of filtered line segments as candidate triangle sides, and then uses a modified RANSAC algorithm to fit triangles onto the set of line segments. We propose two metrics, the Continuity Ratio and the Total Ratio, to evaluate the fitted triangles; those with high fitting scores are taken as detected triangles. Experimental results demonstrate high accuracy in locating preeminent triangles in portraits without dependence on camera or lens parameters. To demonstrate the benefits of our method for digital photography, we have developed two novel applications that aim to help users compose high-quality photos. In the first, we develop a human position and pose recommendation system that retrieves and presents compositionally similar photos taken by competent photographers. The second is a novel sketch-based triangle retrieval system that searches for photos containing a specific triangular configuration. User studies validate the effectiveness of these approaches. A minimal sketch of the triangle-fitting idea follows.
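The sketch below is a simplified RANSAC-style triangle fit over line segments, not the paper's modified algorithm: it samples three segments, builds a candidate triangle from the pairwise intersections of their supporting lines, and scores it by a single coverage fraction that stands in for the Continuity Ratio and Total Ratio. Segment input is assumed to be a list of endpoint pairs in pixel coordinates.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines, or None if (nearly) parallel."""
    h = np.cross(l1, l2)
    if abs(h[2]) < 1e-9:
        return None
    return h[:2] / h[2]

def point_to_edge_dist(p, a, b):
    """Distance from point p to the triangle edge (segment) a-b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    return np.linalg.norm(ap - t * ab)

def coverage(vertices, segments, tol=5.0):
    """Fraction of total segment length lying within `tol` pixels of the triangle
    boundary -- a simplified stand-in for the paper's two fitting ratios."""
    edges = [(vertices[i], vertices[(i + 1) % 3]) for i in range(3)]
    covered = total = 0.0
    for p, q in segments:
        length = np.linalg.norm(q - p)
        total += length
        d = min(max(point_to_edge_dist(p, a, b), point_to_edge_dist(q, a, b))
                for a, b in edges)
        if d < tol:
            covered += length
    return covered / (total + 1e-12)

def ransac_triangle(segments, iters=500, tol=5.0, rng=None):
    """Sample 3 segments, build a candidate triangle from their pairwise line
    intersections, and keep the best-scoring triangle."""
    rng = rng or np.random.default_rng(0)
    best, best_score = None, 0.0
    for _ in range(iters):
        idx = rng.choice(len(segments), size=3, replace=False)
        lines = [line_through(*segments[i]) for i in idx]
        pts = [intersect(lines[i], lines[(i + 1) % 3]) for i in range(3)]
        if any(p is None for p in pts):
            continue
        score = coverage(pts, segments, tol)
        if score > best_score:
            best, best_score = pts, score
    return best, best_score
```

In practice the candidate segments would be the filtered line segments extracted in the first stage, and the accepted triangles would be those whose scores exceed a threshold.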

Skeleton Matching for Severe Weather Detection

Title: Skeleton Matching with Applications in Severe Weather Detection

Authors: Mohammad Mahdi Kamani, Farshid Farhat, Stephen Wistar and James Z. Wang.

Elsevier Journal: Applied Soft Computing, ~27 pages, May 2017.

Abstract:

Severe weather conditions cause an enormous amount of damage around the globe. Bow echo patterns in radar images are associated with a number of these destructive conditions, such as damaging winds, hail, thunderstorms, and tornadoes, and are currently detected manually by meteorologists. In this paper, we propose an automatic framework that detects these patterns with high accuracy by introducing novel skeletonization and shape matching approaches. In this framework, we first extract regions with a high probability of containing a bow echo from radar images and apply our skeletonization method to extract the skeletons of those regions. Next, we prune these skeletons using an innovative fuzzy-logic pruning scheme. Then, using our proposed shape descriptor, Skeleton Context, we extract bow echo features from these skeletons for use in the shape matching and classification steps. The classification output indicates whether a region contains a bow echo, with over 97% accuracy.
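To give a flavor of the descriptor-and-matching step, the sketch below computes a shape-context-style log-polar histogram over a set of skeleton points and compares two skeletons with a chi-squared cost. This is an illustrative simplification, not the paper's exact Skeleton Context descriptor or matching algorithm; the binning parameters are assumptions.

```python
import numpy as np

def skeleton_context(points, n_r=5, n_theta=12):
    """Simplified shape-context-style descriptor for skeleton points: for each
    point, a log-polar histogram of the relative positions of all other points."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    angle = np.arctan2(diff[..., 1], diff[..., 0])    # in [-pi, pi]
    mean_d = dist[dist > 0].mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    descriptors = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.digitize(dist[i, mask], r_edges) - 1
        t_bin = ((angle[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)
        np.add.at(descriptors[i], (r_bin[valid], t_bin[valid]), 1)
    return descriptors.reshape(n, -1)

def match_cost(desc_a, desc_b):
    """Chi-squared cost between two skeletons' descriptors (greedy nearest match)."""
    cost = 0.0
    for d in desc_a:
        chi2 = 0.5 * np.sum((d - desc_b) ** 2 / (d + desc_b + 1e-12), axis=1)
        cost += chi2.min()
    return cost / len(desc_a)
```

A low matching cost against bow-echo templates would then feed the classification step that decides whether a candidate region contains a bow echo.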

Full Text: Skeleton_Matching_Severe_Weather_Forecasting

NEWS Highlights:
https://www.sciencedaily.com/releases/2017/06/170621145133.htm
https://www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php
https://phys.org/news/2017-06-leverages-big-severe-weather.html
http://www.sciencenewsline.com/news/2017062217300053.html

Big data provides deadly storm alert

Detecting Dominant Vanishing Points in Natural Scenes

Abstract:

Linear perspective is widely used in landscape photography to create the impression of depth in a 2D photo. Automated understanding of linear perspective in landscape photography has several real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition, yet adequate automated understanding has remained elusive. We address this problem by detecting the dominant vanishing point and the associated line structures in a photo. Natural landscape scenes, however, pose great technical challenges because the number of strong edges converging to the dominant vanishing point is often inadequate. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground-truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding provides on-site guidance to amateur photographers through a novel viewpoint-specific image retrieval system.
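For intuition only, the sketch below is a basic edge-based baseline for dominant vanishing point detection: it intersects random pairs of detected line segments and keeps the intersection toward which the most segments point within an angular tolerance. This is not the contour-based method proposed in the paper; the segment input and thresholds are assumptions.

```python
import numpy as np

def to_homogeneous_line(seg):
    """Homogeneous line through the two endpoints of a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return np.cross([x1, y1, 1.0], [x2, y2, 1.0])

def dominant_vanishing_point(segments, inlier_deg=2.0, iters=1000, rng=None):
    """RANSAC-style voting: intersect a random pair of lines, then count how many
    segments point toward that intersection within `inlier_deg` degrees."""
    rng = rng or np.random.default_rng(0)
    segs = [np.asarray(s, dtype=float) for s in segments]
    lines = [to_homogeneous_line(s) for s in segs]
    mids = [s.mean(axis=0) for s in segs]
    dirs = [(s[1] - s[0]) / (np.linalg.norm(s[1] - s[0]) + 1e-12) for s in segs]
    best_vp, best_votes = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(segs), size=2, replace=False)
        h = np.cross(lines[i], lines[j])
        if abs(h[2]) < 1e-9:          # parallel lines, no finite intersection
            continue
        vp = h[:2] / h[2]
        votes = 0
        for mid, d in zip(mids, dirs):
            to_vp = vp - mid
            to_vp /= np.linalg.norm(to_vp) + 1e-12
            ang = np.degrees(np.arccos(np.clip(abs(d @ to_vp), -1.0, 1.0)))
            if ang < inlier_deg:
                votes += 1
        if votes > best_votes:
            best_vp, best_votes = vp, votes
    return best_vp, best_votes
```

In weakly structured natural scenes such a baseline often fails for lack of strong converging edges, which is exactly the gap the contour-based global approach in the paper is designed to close.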

Full Text > Vanishing_Point_Detection_Landscape

Dataset (Landscape photos from AVA and Flickr)

Natural scenes with vanishing points