Category Archives: Conference Paper

CAGE: Contention-Aware Game-theoretic modEl

Abstract

Traditional resource management systems rely on a centralized approach to manage users running on each resource. This centralized approach does not scale for large servers, as the number of users sharing resources is growing dramatically and a central manager may not have enough information about applications' needs. In this paper we propose a distributed game-theoretic resource management approach that uses a market auction mechanism to find the optimal strategy in a resource competition game. Applications learn through repeated interactions to choose their actions on the shared resources. Specifically, we examine two case studies: a cache competition game and a main processor/coprocessor congestion game. We enforce costs for each resource and derive a bidding strategy. Evaluation shows that our distributed allocation is scalable and outperforms static and traditional centralized approaches.

Draft > CAGE
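The cache competition case can be sketched as a repeated auction with diminishing marginal bids. The utility values, the 1/(1+k) discount, and the way count below are illustrative assumptions for the sketch, not the paper's actual model:

```python
# Illustrative marginal utilities: how much each application values one
# extra cache way (made-up numbers, not values from the paper).
UTILITY = {"appA": 9.0, "appB": 6.0, "appC": 3.0}

def allocate_ways(n_ways):
    """Auction the cache ways one at a time: each way goes to the app with
    the highest marginal bid, where an app's bid for its (k+1)-th way is
    utility / (1 + k), i.e. diminishing returns."""
    owned = {a: 0 for a in UTILITY}
    spent = {a: 0.0 for a in UTILITY}
    for _ in range(n_ways):
        bids = {a: UTILITY[a] / (1 + owned[a]) for a in UTILITY}
        winner = max(bids, key=bids.get)   # first-price: winner pays its bid
        owned[winner] += 1
        spent[winner] += bids[winner]
    return owned, spent
```

With 8 ways the allocation ends up roughly proportional to the utilities, which is the qualitative outcome a cost-enforcing auction aims for.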

 

Deep-learned Models and Photography Idea Retrieval

Intelligent Portrait Composition Assistance (IPCA)
Farshid Farhat, Mohammad Kamani, Sahil Mishra, James Wang
ACM Multimedia 2017, Mountain View, CA, USA
(Acceptance rate = 64/495 = 12.93%)

ABSTRACT: Retrieving photography ideas appropriate to a given location facilitates the use of smart cameras, as amateurs and enthusiasts are highly interested in taking outstanding photos anytime, anywhere. Existing work captures individual aesthetic techniques, such as the rule of thirds, triangles, and perspective, and retrieves feedback based on a single technique. However, such systems are restricted to that particular technique, and the retrieved results can be limited by the quality of the query. What is missing is a holistic framework that captures the important aspects of a given scene and gives a novice photographer informative feedback for taking a better shot. This work proposes an intelligent portrait composition framework combining our deep-learned models with image retrieval methods. A highly rated, web-crawled portrait dataset is exploited for retrieval. Our framework detects and extracts the ingredients of a given scene, represents them as a correlated hierarchical model, matches the extracted semantics against the dataset of aesthetically composed photos to produce a ranked list of photography ideas, and gradually optimizes the human pose and other artistic aspects of the scene to be captured. A user study demonstrates that our approach is more helpful than comparable feedback retrieval systems.

Please cite our paper if you are using our professional portrait dataset.

@inproceedings{Farhat:2017:IPC:3126686.3126710,
author = {Farhat, Farshid and Kamani, Mohammad Mahdi and Mishra, Sahil and Wang, James Z.},
title = {Intelligent Portrait Composition Assistance: Integrating Deep-learned Models and Photography Idea Retrieval},
booktitle = {Proceedings of the on Thematic Workshops of ACM Multimedia 2017},
series = {Thematic Workshops ’17},
year = {2017},
isbn = {978-1-4503-5416-5},
location = {Mountain View, California, USA},
pages = {17--25},
numpages = {9},
url = {http://doi.acm.org/10.1145/3126686.3126710},
doi = {10.1145/3126686.3126710},
acmid = {3126710},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {deep learning, image aesthetics, image retrieval, photographic composition, portrait photography, smart camera},
}

 

Leveraging big visual data to predict severe weather

NEWS SOURCES: www.sciencedaily.com/releases/2017/06/170621145133.htm | www.eurekalert.org/pub_releases/2017-06/ps-nir062117.php | phys.org/news/2017-06-leverages-big-severe-weather.html | sciencenewsline.com/news/2017062217300053.html

Every year, severe weather endangers millions of people and causes billions of dollars in damage worldwide. But new research from Penn State’s College of Information Sciences and Technology (IST) and AccuWeather has found a way to better predict some of these threats by harnessing the power of big data.

The research team, led by doctoral student Mohammad Mahdi Kamani and including IST professor James Wang, computer science doctoral student Farshid Farhat, and AccuWeather forensic meteorologist Stephen Wistar, has developed a new approach for identifying bow echoes in radar images, a phenomenon associated with fierce and violent winds.

“It was inevitable for meteorology to combine big data, computer vision, and data mining algorithms to seek faster, more robust and accurate results,” Kamani said. Their research paper, “Skeleton Matching with Applications in Severe Weather Detection,” was published in the journal Applied Soft Computing and was funded by the National Science Foundation (NSF).

“I think computer-based methods can provide a third eye to the meteorologists, helping them look at things they don’t have the time or energy for,” Wang said. In the case of bow echoes, this automatic detection would be vital to earlier recognition of severe weather, saving lives and resources.

Wistar, the meteorological authority on the project, explained, “In a line of thunderstorms, a bow echo is one part that moves faster than the other.” As the name suggests, once the weather conditions have fully formed, it resembles the shape of a bow. “It can get really exaggerated,” he said. “It’s important because that’s where you are likely to get serious damage, where trees will come down and roofs get blown off.”

But currently, when the conditions are just beginning to form, they can be easy for forecasters to overlook. “Once it gets to the blatantly obvious point, (a bow echo) jumps out to a meteorologist,” he said. “But on an active weather day? They may not notice it’s just starting to bow.”

To combat this, the research focused on automating the detection of bow echoes. By drawing on the vast historical data collected by the National Oceanic and Atmospheric Administration (NOAA), bow echoes can be automatically identified the instant they begin to form. Wang said, “That’s our project’s fundamental goal—to provide assistance to the meteorologist so they can make decisions quicker and with better accuracy.”

By continually monitoring radar imagery from NOAA, the algorithm is able to scan the entire United States and provide alerts whenever and wherever a bow echo is beginning. During times of active severe weather, when resources are likely to be spread thin, it’s able to provide instant notifications of the development.

“But this is just the first step,” Kamani commented. With the detection algorithm in place, they hope to one day forecast bow echoes before they even form. “The end goal is to have more time to alert people to evacuate or be ready for the straight line winds.” With faster, more precise forecasts, the potential impacts can be significant.

“If you can get even a 10, 15 minute jump and get a warning out earlier pinned down to a certain location instead of entire counties, that’s a huge benefit,” Wistar said. “That could be a real jump for meteorologists if it’s possible. It’s really exciting to see this progress.”

Envisioning the future of meteorology, the researchers see endless potential for the application of big data. “There’s so much we can do,” Wang said. “If we can predict severe thunderstorms better, we can save lives every year.”

Shape matching for automated bow echo detection

Abstract:
Severe weather causes an enormous amount of damage around the globe. Bow echo patterns in radar images are associated with a number of destructive thunderstorm conditions, such as damaging winds, hail, and tornadoes, and are currently detected manually by meteorologists. In this paper, we propose an automatic framework that detects these patterns with high accuracy by introducing novel skeletonization and shape matching approaches. In this framework, we first extract regions of a radar image with a high probability of containing a bow echo and apply our skeletonization method to extract the skeletons of those regions. Next, we prune these skeletons using an innovative fuzzy-logic pruning scheme. Then, using our proposed shape descriptor, Skeleton Context, we extract bow echo features from the skeletons for the shape matching and classification steps. The classification output indicates whether a region contains a bow echo, with over 97% accuracy.
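The Skeleton Context descriptor builds on the classic shape-context idea: for each point of a shape, a log-polar histogram of where the other points lie. A minimal numpy sketch of that underlying histogram follows; the bin counts and radius range are illustrative, and the paper's exact descriptor over skeleton pixels may differ:

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """For each point, build a log-polar histogram of the other points'
    relative positions (distance and angle), normalized by the mean pairwise
    distance so the descriptor is scale-invariant."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # pairwise displacements
    dist = np.hypot(diff[..., 0], diff[..., 1])
    mean_d = dist[dist > 0].mean()                    # scale normalization
    angle = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    hists = np.zeros((n, n_r, n_theta))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_bin = np.searchsorted(r_edges, dist[i, j]) - 1
            if 0 <= r_bin < n_r:                      # drop out-of-range radii
                t_bin = int(angle[i, j] / (2 * np.pi) * n_theta) % n_theta
                hists[i, r_bin, t_bin] += 1
    return hists
```

Two skeletons can then be compared by matching their per-point histograms, which is what the shape matching step does before classification.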

Full Text: Bow_echo_detection

Node Architecture and Cloud Workload Characteristics Analysis

Abstract
The combined impact of node architecture and workload characteristics on off-chip network traffic, together with performance/cost analysis, has not previously been investigated in the context of emerging cloud applications. Motivated by this observation, this paper performs a thorough characterization of twelve cloud workloads using a full-system datacenter simulation infrastructure. We first study the inherent network characteristics of emerging cloud applications, including message inter-arrival times, packet sizes, inter-node communication overhead, self-similarity, and traffic volume. Then, we study the effect of hardware architectural metrics on network traffic. Our experimental analysis reveals that (1) message arrival times and packet-size distributions vary across cloud applications, (2) inter-arrival times imply a large amount of self-similarity as the number of nodes increases, (3) node architecture can play a significant role in shaping overall network traffic, and (4) the applications we study can be broadly divided into those that perform better in a scale-out versus a scale-up node configuration, and into two traffic categories: those with long-duration, low-burst flows and those with short-duration, high-burst flows. Using the results of (3) and (4), the paper discusses the performance/cost trade-offs of scale-out and scale-up approaches and proposes an analytical model that predicts the communication and computation demand of different configurations. We show that the difference in performance per dollar between two node architectures (with the same number of cores system-wide) can be as high as 154 percent, which highlights the need to characterize cloud applications accurately before precious cloud resources are wasted on the wrong architecture.
The results of this study can be used for system modeling, capacity planning, and managing heterogeneous resources in large-scale system designs.
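The kind of trade-off such an analytical model captures can be illustrated with a toy performance-per-dollar calculation. All numbers below (work units, communication overhead, node prices) are made-up placeholders, not figures from the paper:

```python
def perf_per_dollar(work, cores_per_node, n_nodes, node_price,
                    comm_cost_per_node=0.02):
    """Toy model: runtime = compute time (work split across all cores) plus a
    communication term that grows with node count; performance per dollar is
    1/runtime divided by the total cost of the nodes."""
    runtime = work / (cores_per_node * n_nodes) + comm_cost_per_node * n_nodes
    return (1.0 / runtime) / (n_nodes * node_price)

# Same 64 cores system-wide: scale-up (4 x 16-core) vs scale-out (16 x 4-core),
# with hypothetical per-node prices.
scale_up = perf_per_dollar(work=100.0, cores_per_node=16, n_nodes=4, node_price=8.0)
scale_out = perf_per_dollar(work=100.0, cores_per_node=4, n_nodes=16, node_price=2.5)
```

Even in this toy model the two configurations diverge in performance per dollar, because the scale-out design pays more for communication while splitting the same compute; the paper quantifies this gap for real workloads.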

Full Text > Combined Impact of Node Architecture and Cloud Workload Characteristics

More info > Diman Zad Tootaghaj’s Publications

Modeling and Optimization of Straggling Mappers

ABSTRACT
The MapReduce framework is widely used to parallelize batch jobs, since it exploits a high degree of multi-tasking to process them. However, it has been observed that as the number of mappers increases, the map phase can take much longer than expected. This paper shows analytically that the stochastic behavior of mapper nodes has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of mappers without accurate scheduling can degrade overall performance. We analytically capture the effect of stragglers (delayed mappers) on performance. Based on an observed delayed exponential distribution (DED) of mapper response times, we then model the map phase in terms of hardware, system, and application parameters. The mean sojourn time (MST), the time needed to sync the completed map tasks at one reducer, is mathematically formulated. We then optimize the MST by finding the optimal task inter-arrival time to each mapper node. The optimal mapping problem leads to an equilibrium property, which we investigate for different types of inter-arrival and service time distributions in a heterogeneous datacenter (i.e., one with different types of nodes). Our experimental results show the performance and important parameters of different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that always achieves the lowest MST.
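The straggler effect is easy to reproduce in a toy simulation: draw each mapper's service time from a delayed-exponential-style mixture and take the maximum, since the map phase ends with the slowest mapper. The parameters below are illustrative, not fitted values from the paper:

```python
import random

def map_phase_time(n_mappers, mean_service=1.0, straggler_p=0.05,
                   straggler_delay=5.0, rng=random):
    """The map phase finishes when the slowest mapper does. Each mapper's
    service time is exponential; with probability straggler_p it incurs a
    fixed extra delay (a crude delayed-exponential-style mixture)."""
    times = []
    for _ in range(n_mappers):
        t = rng.expovariate(1.0 / mean_service)
        if rng.random() < straggler_p:
            t += straggler_delay
        times.append(t)
    return max(times)

def expected_makespan(n_mappers, trials=2000, seed=7):
    """Monte Carlo estimate of the expected map-phase completion time."""
    rng = random.Random(seed)
    return sum(map_phase_time(n_mappers, rng=rng) for _ in range(trials)) / trials
```

Because the makespan is a maximum over more and more draws, adding mappers lengthens the tail even though each mapper's own distribution is unchanged; this is the effect the paper captures analytically and then optimizes against.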

[Tech Report] [Master Thesis] [IEEE Trans]

Last version > MapReduce_Performance_Optimization

SVD and Noise Estimation based Image Steganalysis

Abstract:
We propose a novel image steganalysis method, based on singular value decomposition and noise estimation, for the spatial-domain LSB embedding families. We first define a content-independence parameter, DS, that is calculated for each LSB embedding rate. Next, we estimate the DS curve and use noise estimation to improve the curve-approximation accuracy. The proposed approach yields an estimate of the LSB embedding rate, as well as information about the existence of an embedded message (if any), and can be applied effectively to a wide range of spatial-domain image LSB steganography families. Evaluated on a large image database, our steganalysis scheme shows significant improvement in both the true-detection and false-alarm rates.
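The intuition that LSB embedding mostly perturbs the small singular values of image blocks can be sketched with numpy. The DS parameter itself is the paper's construction and is not reproduced here; the block statistic below is only a simplified stand-in:

```python
import numpy as np

def lsb_embed(img, rate, rng):
    """Replace the LSB of a random fraction `rate` of pixels with random bits
    (spatial-domain LSB replacement)."""
    out = img.copy()
    mask = rng.random(img.shape) < rate
    out[mask] = (out[mask] & 0xFE) | rng.integers(0, 2, int(mask.sum()),
                                                  dtype=img.dtype)
    return out

def smallest_singular_energy(img, block=8):
    """Mean of the smallest singular value over non-overlapping blocks; LSB
    noise mostly shows up in the small singular values, which is the kind of
    quantity an SVD-based steganalyzer can track against embedding rate."""
    h, w = (s - s % block for s in img.shape)
    vals = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            s = np.linalg.svd(img[i:i + block, j:j + block].astype(float),
                              compute_uv=False)
            vals.append(s[-1])
    return float(np.mean(vals))
```

On smooth (low-rank) image content this statistic is near zero and grows with the embedding rate, which is the separation a rate-estimating steganalyzer exploits.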

Full text > SVD and noise estimation based image steganalysis

Multi-dimensional correlation steganalysis

Abstract:
Multi-dimensional spatial analysis of image pixels has not been much investigated for the steganalysis of LSB steganographic methods. Steganalysis methods based on pixel distributions can be thwarted by intelligently compensating for the statistical characteristics of image pixels, as reported in several papers. Simple LSB replacement has been improved by smarter LSB embedding approaches, e.g., LSB matching and LSB+, but these are basically the same in the sense of LSB alteration. This paper proposes a new analytical method to detect LSB stego images. Our approach is based on the relative locations of the image pixels that an LSB embedding system essentially changes. Furthermore, we introduce new statistical features, including the “local entropies sum” and “clouds min sum,” to achieve higher performance. Simulation results show that our proposed approach outperforms several well-known LSB steganalysis methods in terms of detection accuracy and embedding-rate estimation.
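One of the named features, the "local entropies sum," can be reconstructed from its name as a sum of per-block histogram entropies; the paper's exact definition may differ from this sketch:

```python
import numpy as np

def local_entropy_sum(img, block=4):
    """Sum over non-overlapping blocks of the Shannon entropy of each block's
    pixel-value histogram; LSB embedding tends to raise local entropy in
    otherwise smooth regions."""
    h, w = (s - s % block for s in img.shape)
    total = 0.0
    for i in range(0, h, block):
        for j in range(0, w, block):
            _, counts = np.unique(img[i:i + block, j:j + block],
                                  return_counts=True)
            p = counts / counts.sum()
            total += float(-(p * np.log2(p)).sum())
    return total
```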

Multi-dimensional correlation steganalysis

Game-theoretic model to mitigate packet dropping

Abstract:
Routing performance is severely degraded when misbehaving nodes drop packets instead of properly forwarding them. In this paper, we propose a Game-Theoretic Adaptive Multipath Routing (GTAMR) protocol to detect and punish selfish or malicious nodes that drop information packets during the routing phase, and to defend against collaborative attacks in which nodes try to disrupt communication or save their own power. Our algorithm outranks previous schemes because it is resilient to attacks in which more than one node coordinates its misbehavior, and it can be used in networks whose wireless nodes use directional antennas. We then propose a game-theoretic strategy, ERTFT, for nodes to promote cooperation. In comparison with other proposed TFT-like strategies, ours is resilient to systematic errors in the detection of selfish nodes and does not lead to unending death spirals.
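The death-spiral problem with strict tit-for-tat under detection errors, and the benefit of an error-resilient variant, can be reproduced in a toy iterated game. ERTFT's actual rules are not given in the abstract, so a generous tit-for-tat stands in for it below:

```python
import random

COOPERATE, DEFECT = 1, 0

def play(strategy_a, strategy_b, rounds, error, rng):
    """Iterated forwarding game: each node reacts to the other's last move as
    seen through a noisy detector that flips it with probability `error` (a
    mis-detected selfish node). Returns the fraction of cooperative moves."""
    last_a, last_b = COOPERATE, COOPERATE
    coop = 0
    for _ in range(rounds):
        seen_b = last_b if rng.random() > error else 1 - last_b
        seen_a = last_a if rng.random() > error else 1 - last_a
        move_a = strategy_a(seen_b, rng)
        move_b = strategy_b(seen_a, rng)
        coop += move_a + move_b
        last_a, last_b = move_a, move_b
    return coop / (2 * rounds)

def tft(opponent_last, rng):
    """Strict tit-for-tat: echo the (possibly mis-detected) last move."""
    return opponent_last

def generous_tft(opponent_last, rng):
    """Forgive an apparent defection 30% of the time, breaking error echoes."""
    if opponent_last == COOPERATE or rng.random() < 0.3:
        return COOPERATE
    return DEFECT
```

Under strict tit-for-tat a single detection error starts a retaliation echo that persists until another error, while the forgiving variant recovers quickly; this is the kind of error resilience an ERTFT-style strategy targets.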

Website > Game-Theoretic Network Simulator (GTNS)

Full text > Game-theoretic approach to mitigate packet dropping in wireless ad-hoc networks

Code > GTNS