Category Archives: Journal Paper

Leveraging big visual data to predict severe weather conditions

NEWS SOURCES:
www.sciencedaily.com/releases/2017/06/170621145133.htm
www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php
phys.org/news/2017-06-leverages-big-severe-weather.html
sciencenewsline.com/news/2017062217300053.html

Every year, severe weather endangers millions of people and causes billions of dollars in damage worldwide. But new research from Penn State’s College of Information Sciences and Technology (IST) and AccuWeather has found a way to better predict some of these threats by harnessing the power of big data.

The research team, led by doctoral student Mohammad Mahdi Kamani and including IST professor James Wang, computer science doctoral student Farshid Farhat, and AccuWeather forensic meteorologist Stephen Wistar, has developed a new approach for identifying bow echoes in radar images, a phenomenon associated with fierce and violent winds.

“It was inevitable for meteorology to combine big data, computer vision, and data mining algorithms to seek faster, more robust and accurate results,” Kamani said. Their research paper, “Skeleton Matching with Applications in Severe Weather Detection,” was published in the journal Applied Soft Computing and was funded by the National Science Foundation (NSF).

“I think computer-based methods can provide a third eye to the meteorologists, helping them look at things they don’t have the time or energy for,” Wang said. In the case of bow echoes, this automatic detection would be vital to earlier recognition of severe weather, saving lives and resources.

Wistar, the meteorological authority on the project, explained, “In a line of thunderstorms, a bow echo is one part that moves faster than the other.” As the name suggests, once the weather conditions have fully formed, it resembles the shape of a bow. “It can get really exaggerated,” he said. “It’s important because that’s where you are likely to get serious damage, where trees will come down and roofs get blown off.”

But currently, when the conditions are just beginning to form, it can be easy for forecasters to overlook. “Once it gets to the blatantly obvious point, (a bow echo) jumps out to a meteorologist,” he said. “But on an active weather day? They may not notice it’s just starting to bow.”

To combat this, the research focused on automating the detection of bow echoes. By drawing on the vast historical data collected by the National Oceanic and Atmospheric Administration (NOAA), bow echoes can be automatically identified the instant they begin to form. Wang said, “That’s our project’s fundamental goal—to provide assistance to the meteorologist so they can make decisions quicker and with better accuracy.”

By continually monitoring radar imagery from NOAA, the algorithm is able to scan the entire United States and provide alerts whenever and wherever a bow echo is beginning. During times of active severe weather, when resources are likely to be spread thin, it’s able to provide instant notifications of the development.

“But this is just the first step,” Kamani commented. With the detection algorithm in place, they hope to one day forecast bow echoes before they even form. “The end goal is to have more time to alert people to evacuate or be ready for the straight line winds.” With faster, more precise forecasts, the potential impacts can be significant.

“If you can get even a 10, 15 minute jump and get a warning out earlier pinned down to a certain location instead of entire counties, that’s a huge benefit,” Wistar said. “That could be a real jump for meteorologists if it’s possible. It’s really exciting to see this progress.”

Envisioning the future of meteorology, the researchers see endless potential for the application of big data. “There’s so much we can do,” Wang said. “If we can predict severe thunderstorms better, we can save lives every year.”

Discovering Triangles in Portraits for Supporting Photographic Creation

ABSTRACT

Incorporating the concept of triangles in photos is an effective composition method used by professional photographers to make pictures more interesting or dynamic. Information on the locations of the embedded triangles is valuable for comparing the composition of portrait photos, and can be further leveraged by a retrieval system or used by photographers. This paper presents a system to automatically detect embedded triangles in portrait photos. The problem is challenging because the triangles used in portraits are often not clearly defined by straight lines. The system first extracts a set of filtered line segments as candidate triangle sides, and then utilizes a modified RANSAC algorithm to fit triangles onto the set of line segments. We propose two metrics, Continuity Ratio and Total Ratio, to evaluate the fitted triangles; those with high fitting scores are taken as detected triangles. Experimental results demonstrate high accuracy in locating prominent triangles in portraits without dependence on camera or lens parameters. To demonstrate the benefits of our method to digital photography, we have developed two novel applications that aim to help users compose high-quality photos. In the first application, we develop a human position and pose recommendation system that retrieves and presents compositionally similar photos taken by competent photographers. The second application is a novel sketch-based triangle retrieval system that searches for photos containing a specific triangular configuration. User studies have been conducted to validate the effectiveness of these approaches.
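The RANSAC-style fitting step can be illustrated with a short sketch. The Continuity Ratio and Total Ratio metrics are defined in the paper; the toy scorer below (the fraction of segment endpoints lying near a fitted triangle's sides) merely stands in for them, and all function names and parameters here are ours, not the paper's:

```python
import itertools
import math
import random

def line_through(p, q):
    """Normalized line coefficients (a, b, c) with a*x + b*y + c = 0 through p and q."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    n = math.hypot(a, b)
    return (a / n, b / n, c / n)

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are near-parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def point_line_dist(p, l):
    a, b, c = l
    return abs(a * p[0] + b * p[1] + c)

def ransac_triangle(segments, iters=500, tol=0.05, seed=0):
    """Fit a triangle to line segments [((x1,y1),(x2,y2)), ...] RANSAC-style:
    repeatedly sample 3 segments as candidate sides, extend them to full lines,
    intersect pairwise to get the vertices, and keep the best-scoring triangle.
    Score = fraction of segment endpoints within `tol` of a triangle side
    (a simplified stand-in for the paper's Continuity/Total Ratio)."""
    rng = random.Random(seed)
    points = [p for seg in segments for p in seg]
    best = (0.0, None)
    for _ in range(iters):
        trio = rng.sample(segments, 3)
        lines = [line_through(*s) for s in trio]
        verts = [intersect(lines[i], lines[j])
                 for i, j in itertools.combinations(range(3), 2)]
        if any(v is None for v in verts):
            continue  # two candidate sides were parallel
        score = sum(1 for p in points
                    if min(point_line_dist(p, l) for l in lines) < tol) / len(points)
        if score > best[0]:
            best = (score, verts)
    return best
```

With segments lying on the sides of the triangle (0,0)-(1,0)-(0,1), the recovered vertices match those corners.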

Skeleton Matching with Applications in Severe Weather Detection

Title: Skeleton Matching with Applications in Severe Weather Detection

Authors: Mohammad Mahdi Kamani, Farshid Farhat, Stephen Wistar and James Z. Wang.

Elsevier Journal: Applied Soft Computing, ~27 pages, May 2017.

Abstract:

Severe weather conditions cause an enormous amount of damage around the globe. Bow echo patterns in radar images are associated with a number of these destructive conditions, such as damaging winds, hail, thunderstorms, and tornadoes, and are currently detected manually by meteorologists. In this paper, we propose an automatic framework to detect these patterns with high accuracy by introducing novel skeletonization and shape matching approaches. In this framework, we first extract regions with a high probability of containing a bow echo from radar images and apply our skeletonization method to extract the skeletons of those regions. Next, we prune these skeletons using our innovative pruning scheme based on fuzzy logic. Then, using our proposed shape descriptor, Skeleton Context, we extract bow echo features from these skeletons for use in the shape matching algorithm and classification step. The output of classification indicates whether these regions contain a bow echo, with over 97% accuracy.
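For readers unfamiliar with shape-context-style descriptors, here is a minimal sketch of the general idea behind a descriptor like Skeleton Context: for each skeleton point, a log-polar-style histogram of where the other skeleton points lie. The binning, bin thresholds, and matching cost below are our simplifications for illustration, not the paper's exact formulation:

```python
import math

def skeleton_context(points, n_theta=8):
    """For each skeleton point, build a histogram of the relative positions of
    all other skeleton points (3 radial bins x n_theta angular bins).
    Distances are normalized by the mean distance, so the descriptor is
    translation- and scale-invariant by construction."""
    descriptors = []
    for i, (xi, yi) in enumerate(points):
        rel = [(xj - xi, yj - yi) for j, (xj, yj) in enumerate(points) if j != i]
        mean_d = sum(math.hypot(dx, dy) for dx, dy in rel) / len(rel)
        hist = [0] * (3 * n_theta)
        for dx, dy in rel:
            d = math.hypot(dx, dy) / mean_d                      # scale-normalized
            r_bin = 0 if d < 0.5 else (1 if d < 1.5 else 2)      # our thresholds
            t_bin = int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * n_theta) % n_theta
            hist[r_bin * n_theta + t_bin] += 1
        descriptors.append(hist)
    return descriptors

def match_cost(h1, h2):
    """Chi-square cost between two histograms, the usual cost in shape-context matching."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b)
```

Because only relative positions enter the histogram, a translated copy of the same skeleton yields identical descriptors, which is what makes matching skeletons across radar frames feasible.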

Full Text: Skeleton_Matching_Severe_Weather_Forecasting

NEWS Highlights:


Big data provides deadly storm alert

Detecting Dominant Vanishing Points in Natural Scenes with Application to Composition-Sensitive Image Retrieval

Abstract:

Linear perspective is widely used in landscape photography to create the impression of depth on a 2D photo. Automated understanding of linear perspective in landscape photography has several real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition, yet adequate automated understanding has been elusive. We address this problem by detecting the dominant vanishing point and the associated line structures in a photo. However, natural landscape scenes pose great technical challenges because the number of strong edges converging to the dominant vanishing point is often inadequate. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding provides on-site guidance to amateur photographers on their work through a novel viewpoint-specific image retrieval system.
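As context for the contribution, the classical edge-based baseline that the paper improves on, voting for the point on which the most detected line segments converge, can be sketched as follows. This is a simplified illustration with our own parameter choices, not the contour-based method proposed in the paper:

```python
import itertools
import math

def fit_line(seg):
    """Normalized line (a, b, c) with a*x + b*y + c = 0 through a segment's endpoints."""
    (x1, y1), (x2, y2) = seg
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    n = math.hypot(a, b)
    return (a / n, b / n, c / n)

def detect_vp(segments, tol=0.05):
    """Candidate vanishing points are the pairwise intersections of the
    extended segments; each candidate is scored by how many lines pass
    within `tol` of it, and the best-supported candidate wins."""
    lines = [fit_line(s) for s in segments]
    best = (-1, None)
    for l1, l2 in itertools.combinations(lines, 2):
        det = l1[0] * l2[1] - l2[0] * l1[1]
        if abs(det) < 1e-9:
            continue  # parallel lines meet only at infinity
        x = (l1[1] * l2[2] - l2[1] * l1[2]) / det
        y = (l2[0] * l1[2] - l1[0] * l2[2]) / det
        votes = sum(1 for a, b, c in lines if abs(a * x + b * y + c) < tol)
        if votes > best[0]:
            best = (votes, (x, y))
    return best
```

The failure mode the abstract describes is visible here: with only a few strong converging edges (and outlier segments), the vote counts become unreliable, which is what motivates using global contour structure instead.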

Full Text > Vanishing_Point_Detection_Landscape

Dataset (Landscape photos from AVA and Flickr)

Natural scenes with vanishing points

Detecting Vanishing Points in Natural Scenes with Application in Photo Composition Analysis

Abstract:
Linear perspective is widely used in landscape photography to create the impression of depth on a 2D photo. Automated understanding of the use of linear perspective in landscape photography has a number of real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition. We address this problem by detecting vanishing points and the associated line structures in photos. However, natural landscape scenes pose great technical challenges because there is often an inadequate number of strong edges converging to the vanishing points. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding can be used to provide on-site guidance to amateur photographers on their work through a novel viewpoint-specific image retrieval system.

TMM paper > Vanishing_Point_Detection_Landscape

Dataset (Landscape photos from AVA and Flickr)

Natural scenes with vanishing points

Stochastic Modeling and Optimization of Stragglers

Abstract:
The MapReduce framework is widely used to parallelize batch jobs, since it exploits a high degree of multi-tasking to process them. However, it has been observed that when the number of servers increases, the map phase can take much longer than expected. This paper analytically shows that the stochastic behavior of the servers has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of servers without accurate scheduling can degrade the overall performance. We analytically model the map phase in terms of hardware, system, and application parameters to capture the effects of stragglers on the performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a reducer, is introduced as a performance metric and mathematically formulated. Following that, we stochastically investigate the optimal task scheduling, which leads to an equilibrium property in a datacenter with different types of servers. Our experimental results show the performance of the different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that can always achieve the lowest MST.
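The straggler effect at the heart of this analysis is easy to reproduce with a quick simulation. Assuming i.i.d. exponential task times and one task per server (our simplification; the paper's model is richer), the expected map-phase completion time, i.e. the maximum over all servers, grows like the harmonic number H_n rather than staying flat:

```python
import random

def simulate_map_phase(n_servers, mean_service=1.0, trials=2000, seed=1):
    """Monte-Carlo estimate of the expected map-phase completion time when
    each of n servers runs one task with an i.i.d. exponential service time.
    The phase ends when the slowest server (the straggler) finishes, so the
    expected time is E[max of n exponentials] = mean_service * H_n,
    where H_n = 1 + 1/2 + ... + 1/n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0 / mean_service) for _ in range(n_servers))
    return total / trials
```

For n = 1 the phase finishes in about one mean service time, while for n = 100 the sync point waits roughly H_100 ≈ 5.19 mean service times for the slowest server, which is why the map phase "can take much longer than expected" as servers are added.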

 

Stochastic modeling and optimization of stragglers in mapreduce framework

@phdthesis{farhat2015stochastic,
  title={Stochastic modeling and optimization of stragglers in mapreduce framework},
  author={Farhat, Farshid},
  year={2015},
  school={The Pennsylvania State University}
}

 

Stochastic modeling and optimization of stragglers

@article{farhat2016stochastic,
  title={Stochastic modeling and optimization of stragglers},
  author={Farhat, Farshid and Tootaghaj, Diman and He, Yuxiong and Sivasubramaniam, Anand and Kandemir, Mahmut and Das, Chita},
  journal={IEEE Transactions on Cloud Computing},
  year={2016},
  publisher={IEEE}
}

Towards blind detection of low-rate spatial embedding in image steganalysis

Abstract:

Steganalysis of least significant bit (LSB) embedded images in the spatial domain has been investigated extensively over the past decade, and most well-known LSB steganography methods have been shown to be detectable. However, according to the latest findings in the area, two major issues have remained hard to resolve: very low-rate (VLR) embedding and content-adaptive steganography. The problem of VLR embedding is a generic problem for any steganalyser, while the issue of adaptive embedding depends on the specific hiding algorithm employed. The latter challenge has recently been brought back to the area of LSB steganalysis by highly undetectable stego image steganography, which offers a content-adaptive embedding scheme for grey-scale images. The authors' new image steganalysis method suggests analysing the relative norm of the image Clouds manipulated in an LSB embedding system. The method is a self-dependent image analysis and is capable of operating on low-resolution images. The proposed algorithm is applied to the image in the spatial domain through image Clouding, relative auto-decorrelation feature extraction, and quadratic rate estimation, as the main steps of the proposed analysis procedure. The authors then introduce and use new statistical features, Clouds-Min-Sum and Local-Entropies-Sum, which improve both the detection accuracy and the embedding rate estimation. They analytically verify the functionality of the scheme. Their simulation results show that the proposed approach outperforms several well-known, powerful LSB steganalysis schemes in terms of true and false detection rates and mean squared error.
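As a concrete, if much simpler, example of the kind of statistic LSB steganalysers exploit, here is a sketch of the classical chi-square pairs-of-values attack (a textbook baseline, not the authors' Cloud-based method). It also illustrates why VLR embedding is hard: at small rates the statistic barely moves. The skewed test image and all parameters are our assumptions:

```python
import random

def chi_square_stat(pixels):
    """Chi-square statistic over the pairs-of-values (2i, 2i+1) histogram.
    Full-rate LSB replacement equalizes the counts within each pair, driving
    the statistic toward its noise floor; clean images keep larger imbalances."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    stat = 0.0
    for i in range(0, 256, 2):
        expected = (hist[i] + hist[i + 1]) / 2
        if expected > 0:
            stat += (hist[i] - expected) ** 2 / expected
    return stat

def lsb_embed(pixels, rate, seed=0):
    """Replace the LSB of a `rate` fraction of pixels with random message bits."""
    rng = random.Random(seed)
    out = list(pixels)
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = (out[i] & ~1) | rng.getrandbits(1)
    return out
```

Full-rate embedding collapses the statistic by more than an order of magnitude, but at a 5% rate it stays close to the clean value, which is the VLR detection problem in miniature.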

 

Modeling and Optimization of Straggling Mappers

ABSTRACT
The MapReduce framework is widely used to parallelize batch jobs, since it exploits a high degree of multi-tasking to process them. However, it has been observed that when the number of mappers increases, the map phase can take much longer than expected. This paper analytically shows that the stochastic behavior of mapper nodes has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of mappers without accurate scheduling can degrade the overall performance. We analytically capture the effects of stragglers (delayed mappers) on the performance. Based on an observed delayed exponential distribution (DED) of the response time of mappers, we then model the map phase by means of hardware, system, and application parameters. Mean sojourn time (MST), the time needed to sync the completed map tasks at one reducer, is mathematically formulated. Following that, we optimize MST by finding the optimal task inter-arrival time to each mapper node. The optimal mapping problem leads to an equilibrium property investigated for different types of inter-arrival and service time distributions in a heterogeneous datacenter (i.e., a datacenter with different types of nodes). Our experimental results show the performance and important parameters of the different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that can always achieve the lowest MST.
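The delayed exponential distribution (DED) view can be illustrated with a small simulation. In this toy model, splitting a fixed amount of work over more mappers still lowers the absolute map-phase time, but the straggler penalty, i.e. the ratio of the simulated time to the ideal delay + work/n, grows with the number of mappers. The distribution and the qualitative claim are the paper's; the numeric parameters are our assumptions:

```python
import random

def ded_sample(rng, delay, mean_exp):
    """Delayed exponential distribution (DED): a fixed startup delay plus an
    exponential service component, as observed for mapper response times."""
    return delay + rng.expovariate(1.0 / mean_exp)

def map_phase_time(n_mappers, work=100.0, delay=1.0, trials=1000, seed=2):
    """Expected map-phase time when `work` is split evenly over n mappers and
    each mapper's response time follows a DED. The phase ends at the slowest
    mapper, so the result stays well above the ideal delay + work/n."""
    rng = random.Random(seed)
    per_task = work / n_mappers
    total = 0.0
    for _ in range(trials):
        total += max(ded_sample(rng, delay, per_task) for _ in range(n_mappers))
    return total / trials
```

With work = 100 and delay = 1, going from 10 to 100 mappers still shortens the phase, but the gap over the ideal time widens, which is the sense in which adding mappers without careful scheduling yields diminishing and eventually negative returns.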

[Tech Report] [Master Thesis] [IEEE Trans]

Last version > MapReduce_Performance_Optimization

Eigenvalues-based LSB steganalysis

Abstract:
So far, various components of image characteristics have been used for steganalysis, including the histogram characteristic function, adjacent colors distribution, and sample pair analysis. However, certain steganography methods have been proposed that can thwart some analysis approaches by managing their embedding patterns. In this regard, the present paper introduces a new analytical method for detecting stego images that is robust against some of the embedding patterns designed specifically to foil steganalysis attempts. The proposed approach is based on the analysis of the eigenvalues of the cover correlation matrix. Image cloud partitioning, vertical correlation function computation, constellation of the correlated data, and eigenvalue examination are the major stages of this analysis method. The proposed method uses the LSB plane of images in the spatial domain, extendable to the transform domain, to detect low embedding rates, a major concern in the area of LSB steganography. The simulation results, based on deviation detection and rate estimation methods, indicate that the proposed approach outperforms some well-known LSB steganalysis methods, specifically at low embedding rates.
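The core intuition, that LSB randomization flattens the spectrum of a correlation matrix built from the LSB plane, can be shown in a toy sketch. The paper's actual pipeline (image cloud partitioning, vertical correlation, constellation of the correlated data) is far more elaborate; the row construction, power iteration, and thresholds below are our illustrative choices:

```python
import math
import random

def correlation_matrix(rows):
    """Empirical correlation between LSB rows, with bits mapped to +/-1
    so that the diagonal is exactly 1 and correlated rows give large
    off-diagonal entries."""
    signs = [[2 * b - 1 for b in r] for r in rows]
    n = len(signs[0])
    return [[sum(a * b for a, b in zip(ri, rj)) / n for rj in signs] for ri in signs]

def top_eigenvalue(m, iters=100, seed=3):
    """Largest eigenvalue of a symmetric PSD matrix via power iteration,
    returned as the Rayleigh quotient of the converged vector."""
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in m]
    for _ in range(iters):
        w = [sum(mij * vj for mij, vj in zip(row, v)) for row in m]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    mv = [sum(mij * vj for mij, vj in zip(row, v)) for row in m]
    return sum(a * b for a, b in zip(v, mv))
```

For a cover-like LSB plane whose rows are correlated copies of a base pattern, the correlation matrix has one dominant eigenvalue; after the LSBs are replaced with random bits, the matrix is close to the identity and the spectrum flattens, which is the deviation the detector looks for.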

Eigenvalues-based LSB steganalysis