Intelligent Portrait Composition Assistance (IPCA): Integrating Deep-learned Models and Photography Idea Retrieval
Farshid Farhat, Mohammad Mahdi Kamani, Sahil Mishra, James Z. Wang
ACM Multimedia 2017, Mountain View, CA, USA
(Acceptance rate = 64/495 = 12.93%)

ABSTRACT: Retrieving photography ideas that match a given location facilitates the use of smart cameras, as amateurs and enthusiasts are eager to take impressive photos anytime and anywhere. Existing research captures individual aesthetic techniques, such as the rule of thirds, triangles, and perspective, and retrieves feedback based on a single technique. However, such systems are restricted to that particular technique, and their retrieved results leave room for improvement because they are limited by the quality of the query. A holistic framework is lacking that captures the important aspects of a given scene and gives a novice photographer informative feedback for taking a better shot. This work proposes an intelligent portrait composition framework that combines our deep-learned models with image retrieval methods. A highly rated, web-crawled portrait dataset is exploited for retrieval. Our framework detects and extracts the ingredients of a given scene and represents them as a correlated hierarchical model. It then matches the extracted semantics against the dataset of aesthetically composed photos to produce a ranked list of photography ideas, and gradually optimizes the human pose and other artistic aspects of the scene to be captured. A user study demonstrates that our approach is more helpful than the other feedback retrieval systems we compared against.
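
To make the retrieval step above concrete, here is a minimal sketch (not the code or models from the paper) that ranks highly rated portrait exemplars by the similarity of deep features to a query scene. The pretrained ResNet-50 backbone, the cosine-similarity ranking, and the file names are illustrative assumptions standing in for the deep-learned scene models and hierarchical matching described in the abstract.

# Minimal sketch (assumptions noted above): rank portrait exemplars for a
# query scene by cosine similarity of deep features from a generic
# pretrained CNN used as a stand-in feature extractor.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the final classifier removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """Return an L2-normalized feature vector for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()

def rank_ideas(query_path, exemplar_paths, top_k=5):
    """Rank highly rated portrait exemplars by similarity to the query scene."""
    q = embed(query_path)
    scored = [(float(q @ embed(p)), p) for p in exemplar_paths]
    return sorted(scored, reverse=True)[:top_k]

# Example (hypothetical file names):
# print(rank_ideas("query_scene.jpg", ["portrait_001.jpg", "portrait_002.jpg"]))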

Please cite our paper if you are using our professional portrait dataset.

@inproceedings{Farhat:2017:IPC:3126686.3126710,
author = {Farhat, Farshid and Kamani, Mohammad Mahdi and Mishra, Sahil and Wang, James Z.},
title = {Intelligent Portrait Composition Assistance: Integrating Deep-learned Models and Photography Idea Retrieval},
booktitle = {Proceedings of the on Thematic Workshops of ACM Multimedia 2017},
series = {Thematic Workshops '17},
year = {2017},
isbn = {978-1-4503-5416-5},
location = {Mountain View, California, USA},
pages = {17--25},
numpages = {9},
url = {http://doi.acm.org/10.1145/3126686.3126710},
doi = {10.1145/3126686.3126710},
acmid = {3126710},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {deep learning, image aesthetics, image retrieval, photographic composition, portrait photography, smart camera},
}


Shape matching for automated bow echo detection

Abstract:
Severe weather conditions cause an enormous amount of damage around the globe. Bow echo patterns in radar images are associated with a number of these destructive thunderstorm conditions, such as damaging winds, hail, and tornadoes, and they are currently detected manually by meteorologists. In this paper, we propose an automatic framework that detects these patterns with high accuracy by introducing novel skeletonization and shape matching approaches. In this framework, we first extract regions with a high probability of containing a bow echo from radar images and apply our skeletonization method to extract the skeletons of those regions. Next, we prune these skeletons using an innovative fuzzy-logic pruning scheme. Then, using our proposed shape descriptor, Skeleton Context, we extract bow echo features from the skeletons and use them in the shape matching and classification steps. The classification output indicates whether a region contains a bow echo, with over 97% accuracy.
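
As a rough illustration of the pipeline above, the sketch below thresholds a radar reflectivity field, skeletonizes the high-reflectivity region with scikit-image, and summarizes the skeleton with a log-polar histogram in the spirit of the Skeleton Context descriptor. The reflectivity threshold, the chi-squared template matching, and the decision threshold are illustrative assumptions, not the fuzzy pruning scheme or classifier used in the paper.

# Minimal sketch (assumptions noted above): skeletonize a thresholded radar
# reflectivity mask and describe the skeleton with a log-polar histogram of
# point positions, loosely following the Skeleton Context idea.
import numpy as np
from skimage.morphology import skeletonize

def skeleton_points(reflectivity, threshold=40.0):
    """Skeleton of the high-reflectivity region (threshold in dBZ, assumed)."""
    mask = reflectivity >= threshold
    skel = skeletonize(mask)
    return np.argwhere(skel)            # (N, 2) array of row/col coordinates

def skeleton_context(points, n_r=5, n_theta=12):
    """Log-polar histogram of skeleton points about their centroid."""
    centered = points - points.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)
    theta = np.arctan2(centered[:, 0], centered[:, 1])
    r_edges = np.logspace(np.log10(r.max() / 32 + 1e-6),
                          np.log10(r.max() + 1e-6), n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist.ravel() / max(hist.sum(), 1)   # normalized descriptor

def chi2_distance(h1, h2, eps=1e-9):
    """Chi-squared distance for matching a candidate against templates."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def is_bow_echo(candidate_desc, template_descs, max_dist=0.25):
    """Label a region as a bow echo if its descriptor is close enough to any
    known bow-echo template (distance threshold assumed, tuned on held-out data)."""
    return min(chi2_distance(candidate_desc, t) for t in template_descs) < max_dist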

Full Text: Bow_echo_detection