Cross-domain Generative Models Applied to Cartoon Series

Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos

We propose automatic ‘cartooning’ transforms to enhance privacy in live-streamed and first-person imagery. Much as animated movies abstract away the details of the real world to convey only the most important semantic elements, cartooning transformations can obscure private details of videos while still retaining the overall ‘story.’ Parameters of the algorithms can be adjusted to control the aggressiveness of the transformations. As a first step, we develop an initial automatic algorithm for transforming videos into cartoon-like representations, applying several types of image processing and computer vision techniques. The algorithm has two major components. The first applies image processing to abstract away visual details of the whole scene in an object-independent way. The second detects certain objects and replaces them with clip-art images that convey general attributes of the object but not its fine-grained details. We address the significant challenge of automatically selecting, aligning, and integrating the clip art into the scene in an aesthetically pleasing way. The combination of these two components has several advantages over using either one individually: (1) background details are removed while the presence (but not the details) of certain sensitive objects is highlighted through clip art, and (2) some degree of privacy preservation is ensured by the image processing transform even when the system fails to replace a sensitive object properly.
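A minimal sketch of how the two components described above could be combined, assuming OpenCV: bilateral filtering plus an edge overlay stands in for the object-independent abstraction, and Haar-cascade face detection with a clip-art overlay stands in for the object-replacement step. The function names, parameters, and the specific filters are illustrative assumptions, not the paper's exact method.

```python
import cv2
import numpy as np

def abstract_scene(frame: np.ndarray, smoothing: int = 9, detail: float = 75.0) -> np.ndarray:
    """Object-independent abstraction: smooth colors, keep only strong edges."""
    smoothed = cv2.bilateralFilter(frame, d=smoothing, sigmaColor=detail, sigmaSpace=detail)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Binary edge mask from an adaptive threshold gives the cartoon-style outlines.
    edges = cv2.adaptiveThreshold(cv2.medianBlur(gray, 7), 255,
                                  cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 2)
    return cv2.bitwise_and(smoothed, smoothed, mask=edges)

def replace_faces_with_clipart(frame: np.ndarray, clipart: np.ndarray) -> np.ndarray:
    """Object-dependent step: detect faces and paste a generic clip-art face over
    each detection, conveying presence while hiding identity details."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = frame.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        out[y:y + h, x:x + w] = cv2.resize(clipart, (w, h))
    return out

# Example (hypothetical inputs): cartoon = replace_faces_with_clipart(abstract_scene(frame), clipart_face)
```

In this sketch, raising `smoothing`/`detail` or swapping in a coarser clip-art icon would play the role of the adjustable aggressiveness mentioned in the abstract; the real system must also handle alignment and aesthetic integration of the clip art, which is not shown here.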

Cartoon Summary Generation for Egocentric Videos

Image Inpainting Based on Image Segmentation and Segment Classification

In this paper, a new inpainting algorithm is proposed that first segments the source region using the mean-shift segmentation technique. It then classifies the segments adjacent to the missing region, based on their perimeter relative percentage (described in sub-section II-B), as either a large-segment inpainting problem (sub-section II-D) or a non-uniform-segments inpainting problem (sub-section II-E), and inpaints each of them independently. Since the human eye is very sensitive to artifacts produced in large uniform regions, the algorithm invests more effort in inpainting them; non-uniform regions are inpainted differently, as the human eye is less sensitive to artifacts in such regions. The main idea of the proposed technique is that if the algorithm is going to produce an error, the error should be unnoticeable to the user. The experimental results show the effectiveness of the proposed algorithm in producing more plausible output.
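A rough sketch of the classification step described above, under stated assumptions: given a label map from a mean-shift style segmentation and a binary mask of the missing region, each segment touching the hole is scored by the fraction of the hole's boundary it borders (one plausible reading of "perimeter relative percentage") and routed to the large-segment or non-uniform branch. The 0.35 threshold, the helper name, and the use of NumPy/OpenCV are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import cv2

def classify_boundary_segments(labels: np.ndarray, hole_mask: np.ndarray, threshold: float = 0.35):
    """Split segments bordering the hole into (large_segments, nonuniform_segments)."""
    hole = hole_mask.astype(np.uint8)
    # Pixels just outside the hole form its boundary ring.
    ring = cv2.dilate(hole, np.ones((3, 3), np.uint8)) - hole
    ring_labels = labels[ring.astype(bool)]
    total = ring_labels.size
    large, nonuniform = [], []
    for seg in np.unique(ring_labels):
        share = np.count_nonzero(ring_labels == seg) / total
        (large if share >= threshold else nonuniform).append(int(seg))
    return large, nonuniform
```

Segments returned in the first list would then be handled by the carefully treated large-segment branch (sub-section II-D), and the rest by the non-uniform branch (sub-section II-E).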