Category Archives: Journal Paper

CAPTAIN: Comprehensive Composition Assistance for Photo Taking

Abstract: Many people are interested in taking astonishing photos and sharing them with others. Emerging hardware and software have made digital photography ubiquitous and increasingly capable. Because composition matters in photography, researchers have leveraged common composition techniques, such as the rule of thirds, the triangle technique, and perspective-related techniques, to assess the aesthetic quality of photos computationally. However, the composition techniques developed by professionals are far more diverse than the well-documented ones can cover, and there has been no holistic framework that captures the important aspects of a given scene and offers constructive cues to help individuals take a better shot. We leverage these vast, under-explored innovations in photography for computational composition assistance. We propose a comprehensive framework, named CAPTAIN (Composition Assistance for Photo Taking), containing integrated deep-learned semantic detectors, sub-genre categorization, artistic pose clustering, personalized aesthetics-based image retrieval, and style set matching. The framework is backed by a large dataset crawled from a photo-sharing website whose users are mostly photography enthusiasts and professionals.
The work proposes a sequence of steps that have not been explored by researchers in the past. It addresses personal preferences for composition by presenting a ranked list of photographs to the user based on user-specified weights in the similarity measure. Our framework extracts the ingredients of a given snapshot of a scene (i.e., the scene the user is interested in photographing) as a set of composition-related features, ranging from low-level features such as color, pattern, and texture to high-level features such as pose, category, rating, gender, and object. Our composition model, indexed offline, provides visual exemplars as recommendations for the scene, a novel model for aesthetics-related image retrieval. The matching algorithm recognizes the best shot among a sequence of shots with respect to the user's preferred style set. We have conducted a number of experiments on the newly proposed components and report our findings. A user study demonstrates that the work is useful to those taking photos.

Keywords: Computational Composition, Image Aesthetics, Photography, Deep Learning, Image Retrieval

Full article > captain_springer_ijcv
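
To make the exemplar-retrieval step concrete, below is a minimal sketch in Python of ranking offline-indexed photos by a user-weighted similarity measure, as the abstract describes. The feature names, toy data, and the plain weighted Euclidean distance are illustrative assumptions, not the paper's actual features or model.

import numpy as np

def rank_exemplars(query_feats, index, weights, top_k=5):
    """Rank indexed photos by weighted distance to the query scene."""
    scores = []
    for photo_id, feats in index.items():
        # Weighted sum of per-feature Euclidean distances; lower is closer.
        d = sum(weights[name] * np.linalg.norm(query_feats[name] - feats[name])
                for name in weights)
        scores.append((d, photo_id))
    return [pid for _, pid in sorted(scores)[:top_k]]

# Toy usage: two low-level features ("color", "texture") and one high-level
# feature ("pose"); this user emphasizes pose similarity over color.
rng = np.random.default_rng(0)
index = {f"photo_{i}": {"color": rng.random(8),
                        "texture": rng.random(4),
                        "pose": rng.random(6)} for i in range(100)}
query = {"color": rng.random(8), "texture": rng.random(4), "pose": rng.random(6)}
weights = {"color": 0.2, "texture": 0.3, "pose": 0.5}
print(rank_exemplars(query, index, weights, top_k=3))

Because the preference weights live entirely in the query, the same offline index can serve users with different compositional priorities without re-indexing.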

Auction-based Resource Management in Computer Architecture

ABSTRACT

Resource management systems typically rely on a centralized approach to manage the applications running on each resource. A centralized resource management system is neither efficient nor scalable for large-scale servers, as the number of applications running on shared resources is increasing dramatically and the centralized manager may not have enough information about applications' needs.

This work proposes a decentralized auction-based resource management approach to reach an optimal strategy in a resource competition game. The applications learn through repeated interactions to select their optimal actions for shared resources. Specifically, we investigate two case studies: a cache competition game and a main processor and coprocessor congestion game. We enforce a cost for each resource and derive the corresponding bidding strategy. Evaluation of the proposed approach shows that our distributed allocation is scalable and outperforms static and traditional approaches.

Full article > tpds-carma
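
As a concrete illustration of the general mechanism (not the paper's derived bidding strategy), here is a minimal sketch of a proportional-share auction for cache capacity in which each application repeatedly adjusts its bid. The valuations, learning rate, and gradient-style update rule are assumptions made for illustration.

import numpy as np

def proportional_auction(bids, capacity):
    # Each application receives capacity in proportion to its bid.
    total = bids.sum()
    return capacity * bids / total if total > 0 else np.zeros_like(bids)

rng = np.random.default_rng(1)
n_apps, capacity = 4, 16.0                 # e.g., 16 cache ways, 4 applications
value = rng.uniform(0.5, 2.0, n_apps)      # private marginal value of one unit
bids = np.ones(n_apps)

for _ in range(200):
    # Utility_i = value_i * alloc_i - bid_i (the bid is the enforced cost).
    # Each application nudges its bid by the marginal utility of bidding more,
    # a best-response-style update learned through repeated interactions.
    marginal = value * capacity * (bids.sum() - bids) / bids.sum() ** 2
    bids = np.clip(bids + 0.05 * (marginal - 1.0), 0.01, None)

alloc = proportional_auction(bids, capacity)
print("values:     ", value.round(2))
print("final bids: ", bids.round(2))
print("allocation: ", alloc.round(2))

Higher-value applications learn to bid more and so capture more of the shared resource, without any centralized manager knowing their private valuations.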

Leveraging big visual data to predict severe weather

NEWS SOURCES:
www.sciencedaily.com/releases/2017/06/170621145133.htm
www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php
phys.org/news/2017-06-leverages-big-severe-weather.html
sciencenewsline.com/news/2017062217300053.html

Every year, severe weather endangers millions of people and causes billions of dollars in damage worldwide. But new research from Penn State’s College of Information Sciences and Technology (IST) and AccuWeather has found a way to better predict some of these threats by harnessing the power of big data.

The research team, led by doctoral student Mohammad Mahdi Kamani and including IST professor James Wang, computer science doctoral student Farshid Farhat, and AccuWeather forensic meteorologist Stephen Wistar, has developed a new approach for identifying bow echoes in radar images, a phenomenon associated with fierce and violent winds.

“It was inevitable for meteorology to combine big data, computer vision, and data mining algorithms to seek faster, more robust and accurate results,” Kamani said. Their research paper, “Skeleton Matching with Applications in Severe Weather Detection,” was published in the journal Applied Soft Computing and was funded by the National Science Foundation (NSF).

“I think computer-based methods can provide a third eye to the meteorologists, helping them look at things they don’t have the time or energy for,” Wang said. In the case of bow echoes, this automatic detection would be vital to earlier recognition of severe weather, saving lives and resources.

Wistar, the meteorological authority on the project, explained, “In a line of thunderstorms, a bow echo is one part that moves faster than the other.” As the name suggests, once the weather conditions have fully formed, it resembles the shape of a bow. “It can get really exaggerated,” he said. “It’s important because that’s where you are likely to get serious damage, where trees will come down and roofs get blown off.”

But currently, when the conditions are just beginning to form, it can be easy for forecasters to overlook. “Once it gets to the blatantly obvious point, (a bow echo) jumps out to a meteorologist,” he said. “But on an active weather day? They may not notice it’s just starting to bow.”

To combat this, the research focused on automating the detection of bow echoes. By drawing on the vast historical data collected by the National Oceanic and Atmospheric Administration (NOAA), bow echoes can be automatically identified the instant they begin to form. Wang said, “That’s our project’s fundamental goal—to provide assistance to the meteorologist so they can make decisions quicker and with better accuracy.”

By continually monitoring radar imagery from NOAA, the algorithm is able to scan the entire United States and provide alerts whenever and wherever a bow echo is beginning. During times of active severe weather, when resources are likely to be spread thin, it’s able to provide instant notifications of the development.

“But this is just the first step,” Kamani commented. With the detection algorithm in place, they hope to one day forecast bow echoes before they even form. “The end goal is to have more time to alert people to evacuate or be ready for the straight line winds.” With faster, more precise forecasts, the potential impacts can be significant.

“If you can get even a 10, 15 minute jump and get a warning out earlier pinned down to a certain location instead of entire counties, that’s a huge benefit,” Wistar said. “That could be a real jump for meteorologists if it’s possible. It’s really exciting to see this progress.”

Envisioning the future of meteorology, the researchers see endless potential for the application of big data. “There’s so much we can do,” Wang said. “If we can predict severe thunderstorms better, we can save lives every year.”

Discovering Triangles in Portraits and Landscapes

ABSTRACT

Incorporating the concept of triangles in photos is an effective composition method used by professional photographers to make pictures more interesting or dynamic. Information on the locations of the embedded triangles is valuable for comparing the composition of portrait photos, which can be further leveraged by a retrieval system or used by photographers. This paper presents a system to automatically detect embedded triangles in portrait photos. The problem is challenging because the triangles used in portraits are often not clearly defined by straight lines. The system first extracts a set of filtered line segments as candidate triangle sides, and then utilizes a modified RANSAC algorithm to fit triangles onto the set of line segments. We propose two metrics, Continuity Ratio and Total Ratio, to evaluate the fitted triangles; those with high fitting scores are taken as detected triangles. Experimental results demonstrate high accuracy in locating preeminent triangles in portraits without dependence on camera or lens parameters. To demonstrate the benefits of our method to digital photography, we have developed two novel applications that aim to help users compose high-quality photos. In the first application, we develop a human position and pose recommendation system that retrieves and presents compositionally similar photos taken by competent photographers. The second application is a novel sketch-based triangle retrieval system that searches for photos containing a specific triangular configuration. User studies have been conducted to validate the effectiveness of these approaches.
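
The following is a minimal sketch of the RANSAC-style fitting idea: sample three candidate segments, extend them to infinite lines whose pairwise intersections form the triangle vertices, and keep the triangle supported by the most segment endpoints. The inlier-fraction score here is a simplified stand-in for the paper's Continuity Ratio and Total Ratio metrics.

import itertools
import numpy as np

def line_params(p, q):
    """Return (a, b, c) of the normalized line ax + by + c = 0 through p, q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    n = np.hypot(a, b)
    return a / n, b / n, c / n

def intersect(l1, l2):
    """Intersection of two lines in (a, b, c) form, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return np.array([(b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det])

def fit_triangle(segments, n_iter=500, tol=0.5, seed=2):
    rng = np.random.default_rng(seed)
    pts = np.array([p for seg in segments for p in seg], dtype=float)
    best_score, best_tri = -1.0, None
    for _ in range(n_iter):
        trio = rng.choice(len(segments), size=3, replace=False)
        lines = [line_params(*segments[i]) for i in trio]
        verts = [intersect(lines[i], lines[j])
                 for i, j in itertools.combinations(range(3), 2)]
        if any(v is None for v in verts):
            continue
        # Score: fraction of all segment endpoints within tol of some side.
        dists = np.array([[abs(a * x + b * y + c) for a, b, c in lines]
                          for x, y in pts])
        score = float(np.mean(dists.min(axis=1) < tol))
        if score > best_score:
            best_score, best_tri = score, np.array(verts)
    return best_tri, best_score

# Toy usage: three noisy segments lying roughly along a triangle's sides.
segs = [((0, 0), (4, 0.1)), ((4.1, 0), (2, 3.9)), ((2, 4), (0.1, 0.2))]
tri, score = fit_triangle(segs)
print("vertices:\n", tri.round(1), "\nscore:", round(score, 2))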

Skeleton Matching for Severe Weather Detection

Title: Skeleton Matching with Applications in Severe Weather Detection

Authors: Mohammad Mahdi Kamani, Farshid Farhat, Stephen Wistar and James Z. Wang.

Elsevier Journal: Applied Soft Computing, ~27 pages, May 2017.

Abstract:

Severe weather conditions cause an enormous amount of damage around the globe. Bow echo patterns in radar images are associated with a number of these destructive conditions, such as damaging winds, hail, thunderstorms, and tornadoes; currently, they are detected manually by meteorologists. In this paper, we propose an automatic framework that detects these patterns with high accuracy by introducing novel skeletonization and shape matching approaches. In this framework, we first extract regions with a high probability of containing a bow echo from radar images and apply our skeletonization method to extract the skeletons of those regions. Next, we prune these skeletons using an innovative fuzzy-logic pruning scheme. Then, using our proposed shape descriptor, Skeleton Context, we extract bow echo features from these skeletons for use in the shape matching and classification steps. The classification output indicates whether a region contains a bow echo, with over 97% accuracy.

Full Text: Skeleton_Matching_Severe_Weather_Forecasting
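
As a rough illustration of the skeleton-extraction stage, the sketch below thresholds a radar reflectivity field and thins the resulting regions with off-the-shelf scikit-image morphology. The 40 dBZ threshold and minimum region size are assumptions, and the paper's own skeletonization, fuzzy-logic pruning, and Skeleton Context descriptor are not reproduced here.

import numpy as np
from skimage.morphology import skeletonize, remove_small_objects

def bow_echo_candidates(reflectivity, thresh=40.0, min_area=200):
    """Extract skeletons of high-reflectivity regions from one radar frame."""
    mask = reflectivity >= thresh                # high-probability regions
    mask = remove_small_objects(mask, min_area)  # drop speckle noise
    return skeletonize(mask)                     # 1-pixel-wide medial skeleton

# Toy usage on a synthetic crescent-shaped (bow-like) reflectivity field.
yy, xx = np.mgrid[0:200, 0:200]
r = np.hypot(xx - 100, yy - 140)
field = 55.0 * ((np.abs(r - 70) < 8) & (yy < 140)).astype(float)
skel = bow_echo_candidates(field)
print("skeleton pixels:", int(skel.sum()))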

NEWS Highlights:
https://www.sciencedaily.com/releases/2017/06/170621145133.htm
https://www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php
https://phys.org/news/2017-06-leverages-big-severe-weather.html
http://www.sciencenewsline.com/news/2017062217300053.html

Big data provides deadly storm alert

Detecting Dominant Vanishing Points in Natural Scenes

Abstract:

Linear perspective is widely used in landscape photography to create the impression of depth on a 2D photo. Automated understanding of linear perspective in landscape photography has several real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition, yet adequate automated understanding has been elusive. We address this problem by detecting the dominant vanishing point and the associated line structures in a photo. However, natural landscape scenes pose great technical challenges because the number of strong edges converging to the dominant vanishing point is often inadequate. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding provides on-site guidance to amateur photographers on their work through a novel viewpoint-specific image retrieval system.

Full Text (TMM paper) > Vanishing_Point_Detection_Landscape

Dataset (Landscape photos from AVA and Flickr)

Natural scenes with vanishing points
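
For intuition, here is a minimal sketch of dominant vanishing point estimation by voting over pairwise intersections of detected line segments. It uses plain Canny edges and probabilistic Hough segments from OpenCV, so it lacks the paper's contour-based global structure cue; all thresholds are illustrative assumptions, and "landscape.jpg" is a hypothetical input file.

import cv2
import numpy as np

def _intersect(s1, s2):
    """Intersection of the infinite lines through two segments, or None."""
    x1, y1, x2, y2 = s1; x3, y3, x4, y4 = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1*y2 - y1*x2)*(x3 - x4) - (x1 - x2)*(x3*y4 - y3*x4)) / d
    py = ((x1*y2 - y1*x2)*(y3 - y4) - (y1 - y2)*(x3*y4 - y3*x4)) / d
    return np.array([px, py])

def dominant_vanishing_point(gray, angle_tol=np.deg2rad(2.0)):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=5)
    if segs is None:
        return None
    segs = segs[:, 0, :].astype(float)          # rows of (x1, y1, x2, y2)
    dirs = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
    mids = np.column_stack([(segs[:, 0] + segs[:, 2]) / 2,
                            (segs[:, 1] + segs[:, 3]) / 2])
    best_vp, best_votes = None, -1
    # Exhaustive pairwise candidates: written for clarity, not speed.
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            vp = _intersect(segs[i], segs[j])
            if vp is None:
                continue
            # A segment votes for vp if the ray from its midpoint to vp is
            # nearly parallel to the segment itself (angle modulo pi).
            to_vp = np.arctan2(vp[1] - mids[:, 1], vp[0] - mids[:, 0])
            diff = np.abs((to_vp - dirs + np.pi / 2) % np.pi - np.pi / 2)
            votes = int((diff < angle_tol).sum())
            if votes > best_votes:
                best_votes, best_vp = votes, vp
    return best_vp

# e.g.: vp = dominant_vanishing_point(cv2.imread("landscape.jpg", cv2.IMREAD_GRAYSCALE))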


Stochastic Modeling and Optimization of Stragglers

Abstract: The MapReduce framework is widely used to parallelize batch jobs, as it exploits a high degree of multi-tasking to process them. However, it has been observed that as the number of servers increases, the map phase can take much longer than expected. This paper analytically shows that the stochastic behavior of the servers has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of servers without accurate scheduling can degrade the overall performance. We analytically model the map phase in terms of hardware, system, and application parameters to capture the effect of stragglers on performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a reducer, is introduced as a performance metric and mathematically formulated. Following that, we stochastically investigate the optimal task scheduling, which leads to an equilibrium property in a datacenter with different types of servers. Our experimental results show the performance of different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that always achieves the lowest MST.
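
For intuition on the straggler effect the abstract describes, the simulation sketch below treats the map phase as ending when the slowest server finishes. With i.i.d. exponential service times (an assumption made here for illustration, not the paper's model), the expected maximum of n finish times grows like the harmonic number H_n ≈ ln n, so the gap between the ideal and the expected completion time widens as servers are added.

import numpy as np

rng = np.random.default_rng(3)
work = 1000.0                         # total map work, in unit tasks

for n_servers in (10, 100, 1000):
    per_server = work / n_servers     # ideal (deterministic) completion time
    # Each server's finish time is per_server * Exp(1), drawn 10,000 times.
    finish = per_server * rng.exponential(1.0, size=(10_000, n_servers))
    makespan = finish.max(axis=1).mean()  # map phase ends with the slowest server
    print(f"{n_servers:5d} servers: ideal {per_server:7.2f}, "
          f"expected {makespan:7.2f}, straggler penalty x{makespan / per_server:4.1f}")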

Authors: Farshid Farhat and Diman Zad Tootaghaj from Penn State, and Yuxiong He from MSR (Microsoft Research). The work was done during my visit to MSR in Summer 2015.

Stochastic modeling and optimization of stragglers in mapreduce framework

@phdthesis{farhat2015stochastic,
  title={Stochastic modeling and optimization of stragglers in mapreduce framework},
  author={Farhat, Farshid},
  year={2015},
  school={The Pennsylvania State University}
}


Stochastic modeling and optimization of stragglers

@article{farhat2016stochastic,
  title={Stochastic modeling and optimization of stragglers},
  author={Farhat, Farshid and Tootaghaj, Diman and He, Yuxiong and Sivasubramaniam, Anand and Kandemir, Mahmut and Das, Chita},
  journal={IEEE Transactions on Cloud Computing},
  year={2016},
  publisher={IEEE}
}

Big Data Computing: Modeling and Optimization

Abstract:
The MapReduce framework is widely used to parallelize batch jobs, as it exploits a high degree of multi-tasking to process them. However, it has been observed that as the number of servers increases, the map phase can take much longer than expected. This thesis analytically shows that the stochastic behavior of the servers has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of servers without accurate scheduling can degrade the overall performance. We analytically model the map phase in terms of hardware, system, and application parameters to capture the effect of stragglers on performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a reducer, is introduced as a performance metric and mathematically formulated. Following that, we stochastically investigate the optimal task scheduling, which leads to an equilibrium property in a datacenter with different types of servers. Our experimental results show the performance of different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that always achieves the lowest MST.

• Farshid Farhat, Diman Zad Tootaghaj, Anand Sivasubramaniam, Mahmut Kandemir, and Chita R. Das are with the School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA. Email: {fuf111,dxz149,anand,kandemir,das}@cse.psu.edu.

• Yuxiong He is with the Cloud Computing Futures group at Microsoft Research, Redmond, WA 98052, USA. Email: yuxhe@microsoft.com.

• The work was done during my visit to MSR in Redmond, WA, in June 2016.

Blind detection of low-rate embedding

Abstract:

Steganalysis of least significant bit (LSB) embedded images in the spatial domain has been investigated extensively over the past decade, and most well-known LSB steganography methods have been shown to be detectable. However, according to the latest findings in the area, two major issues remain hard to resolve: very low-rate (VLR) embedding and content-adaptive steganography. The problem of VLR embedding is generic to any steganalyser, while the issue of adaptive embedding depends on the specific hiding algorithm employed. The latter challenge has recently been brought back to LSB steganalysis by highly undetectable stego image steganography, which offers a content-adaptive embedding scheme for grey-scale images. The authors' new image steganalysis method analyses the relative norm of the image Clouds manipulated in an LSB embedding system. The method is a self-dependent image analysis and is capable of operating on low-resolution images. The proposed algorithm is applied to the image in the spatial domain through image Clouding, relative auto-decorrelation feature extraction, and quadratic rate estimation, the main steps of the proposed analysis procedure. The authors then introduce and use new statistical features, Clouds-Min-Sum and Local-Entropies-Sum, which improve both the detection accuracy and the embedding-rate estimation. They analytically verify the functionality of the scheme. Their simulation results show that the proposed approach outperforms some well-known, powerful LSB steganalysis schemes in terms of true and false detection rates and mean squared error.
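
For background on the setting the paper targets, here is a minimal sketch of plain spatial-domain LSB replacement at a very low rate; it is generic LSB embedding for illustration, not the authors' Clouding or relative-norm analysis.

import numpy as np

def lsb_embed(cover, bits):
    # Replace the least significant bits of the first len(bits) pixels.
    stego = cover.flatten().copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rate = 0.05                                   # very low-rate (VLR) regime
bits = rng.integers(0, 2, int(rate * cover.size), dtype=np.uint8)
stego = lsb_embed(cover, bits)
# Only about rate/2 of all pixels actually change (half the message bits
# already match), which is why VLR embedding leaves so faint a footprint.
print("fraction of pixels changed:", float(np.mean(cover != stego)))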