Big Visual Data Computing, Image Retrieval and Deep Learning, Smart Cropping, Aesthetic Scoring, Portrait Photography, Computer Vision and Pattern Recognition, Distributed and Parallel Systems, Network Security
ABSTRACT: Retrieving photography ideas for a given location facilitates the use of smart cameras, as amateurs and enthusiasts are highly interested in taking stunning photos anytime and anywhere. Existing research captures individual aesthetic techniques, such as the rule of thirds, the triangle, and perspective, and retrieves feedback based on a single technique. However, these methods are restricted to one technique at a time, and the retrieved results leave room for improvement because they are limited by the quality of the query. What is missing is a holistic framework that captures the important aspects of a given scene and gives a novice photographer informative feedback for taking a better shot. This work proposes an intelligent portrait-composition framework built on our deep-learned models and image retrieval methods. A highly rated, web-crawled portrait dataset is exploited for retrieval. Our framework detects and extracts the ingredients of a given scene and represents them as a correlated hierarchical model. It then matches the extracted semantics against the dataset of aesthetically composed photos to produce a ranked list of photography ideas, and gradually optimizes the human pose and other artistic aspects of the scene to be captured. A user study demonstrates that our approach is more helpful than comparable feedback retrieval systems.
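The matching step described above can be sketched as similarity-based retrieval: the query scene's extracted semantics are compared against the feature vectors of the dataset's exemplar photos, and the exemplars are returned in ranked order. The feature names and toy values below are purely illustrative, not the paper's actual scene representation.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_exemplars(query_features, exemplars):
    # Rank dataset photos by similarity to the query scene's features.
    # `exemplars` maps a photo id to its feature vector.
    scored = [(cosine(query_features, f), pid) for pid, f in exemplars.items()]
    scored.sort(reverse=True)
    return [pid for _, pid in scored]

# Toy semantic features: (sky ratio, subject size, horizon position).
dataset = {
    "beach_portrait": [0.6, 0.3, 0.5],
    "forest_portrait": [0.1, 0.4, 0.7],
    "urban_portrait": [0.3, 0.5, 0.4],
}
print(rank_exemplars([0.55, 0.35, 0.5], dataset))
# → ['beach_portrait', 'urban_portrait', 'forest_portrait']
```

The top-ranked exemplars then serve as the photography ideas fed back to the user; the actual system uses a richer hierarchical scene model rather than a flat vector.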
Given any aspect ratio or an oval shape, or operating fully automatically, AutoThumbGen generates a thumbnail from the input image. The app recognizes the most prominent part of the input image and crops it at the proper thumbnail size. The source code is in C; it has also been embedded in Android via JNI and called from PHP via exec.
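A minimal sketch of the idea, assuming a simple gradient-magnitude saliency map (AutoThumbGen's actual C implementation may use a different prominence measure): find the saliency centroid of the image and center the crop window there, clamped to the image bounds.

```python
def saliency(img):
    # Approximate prominence as local gradient magnitude (|dx| + |dy|)
    # over a grayscale image given as a 2D list of intensities.
    h, w = len(img), len(img[0])
    s = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][min(x + 1, w - 1)] - img[y][x]
            dy = img[min(y + 1, h - 1)][x] - img[y][x]
            s[y][x] = abs(dx) + abs(dy)
    return s

def crop_box(img, tw, th):
    # Center a tw x th crop window on the saliency centroid,
    # clamped so the window stays inside the image.
    s = saliency(img)
    h, w = len(img), len(img[0])
    total = sum(map(sum, s)) or 1.0
    cx = sum(x * s[y][x] for y in range(h) for x in range(w)) / total
    cy = sum(y * s[y][x] for y in range(h) for x in range(w)) / total
    x0 = min(max(int(cx) - tw // 2, 0), w - tw)
    y0 = min(max(int(cy) - th // 2, 0), h - th)
    return x0, y0, tw, th
```

For an oval thumbnail, the same box can be computed first and an elliptical mask applied inside it.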
Contributors: Jia Li, Farshid Farhat, James Wang.
Multi-dimensional spatial analysis of image pixels has not been much investigated for the steganalysis of LSB steganographic methods. As reported in several papers, steganalysis methods based on pixel distributions can be thwarted by intelligently compensating the statistical characteristics of the image pixels. Simple LSB replacement has been improved by smarter embedding approaches, e.g., LSB matching and LSB+, but these are essentially the same in the sense of LSB alteration. This paper proposes a new analytical method to detect LSB stego images. Our approach is based on the relative locations of the image pixels that are essentially changed by an LSB embedding system. Furthermore, we introduce new statistical features, including the “local entropies sum” and the “clouds min sum”, to achieve higher performance. Simulation results show that our proposed approach outperforms some well-known LSB steganalysis methods in terms of detection accuracy and embedding-rate estimation.
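To make the setting concrete, here is a sketch of classic LSB replacement together with one plausible windowed-entropy statistic. The `local_entropies_sum` below is only an illustration of the kind of feature the abstract names, not the paper's actual definition: LSB noise tends to split flat regions into two neighboring intensity values, which raises local entropy.

```python
from collections import Counter
from math import log2
import random

def lsb_embed(pixels, bits, seed=0):
    # Classic LSB replacement: overwrite the least significant bit
    # of randomly chosen pixels with the message bits.
    rng = random.Random(seed)
    out = list(pixels)
    for pos, bit in zip(rng.sample(range(len(out)), len(bits)), bits):
        out[pos] = (out[pos] & ~1) | bit
    return out

def local_entropies_sum(pixels, window=8):
    # Illustrative statistic: sum of Shannon entropies over
    # non-overlapping windows of the (flattened) pixel sequence.
    total = 0.0
    for i in range(0, len(pixels) - window + 1, window):
        counts = Counter(pixels[i:i + window])
        n = sum(counts.values())
        total -= sum(c / n * log2(c / n) for c in counts.values())
    return total
```

On a perfectly flat cover the statistic is zero, and embedding even a single bit raises it, which is the intuition a detector thresholds on; a real steganalyzer would of course calibrate against natural-image texture.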