Title | : | Moving Object Segmentation and Panorama Creation for Unconstrained Videos |
Speaker | : | Geethu Miriam Jacob (IITM) |
Details | : | Tue, 28 Aug, 2018 3:00 PM @ A M Turing Hall |
Abstract | : | In the field of video analytics, the processing of unconstrained videos captured with hand-held and moving cameras has recently attracted the attention of researchers. Videos taken on unstable camera platforms are generally shaky (jittery), so higher-level tasks such as moving object segmentation and panorama creation show degraded performance on them. These tasks assume a steady, smooth camera-motion model and hence fail to produce satisfactory results. The first seminar presented methods for video stabilization and moving object segmentation on jittery/shaky videos. The second seminar will walk through the frameworks we designed for moving object segmentation and panorama creation in unconstrained videos. We first present a method for moving object segmentation in jittery videos, in which we model the motion trajectories in Kendall's Shape Space, stabilize them, and cluster them into foreground and background trajectories. Each submodule of the framework uses an appropriately designed cost-minimization approach. The accurately clustered trajectory points serve as the initialization for the final segmentation of the moving object. Using a segmentation error metric for quantitative evaluation, we show that our method is superior to state-of-the-art techniques. The second part of the talk presents three warping models, together with a framework, for image stitching and panorama creation in unconstrained videos. We will first discuss our methods for stitching images/frames with large parallax, which employ a novel demons-based edge-preserving image registration technique, known as DiffeoWarps, and its mesh-based counterpart, DiffeoMeshes. An additional two-stage warping model combining DiffeoMeshes and Green Coordinates, known as GreenWarps, will then be presented. Finally, a panorama generation framework that takes an unconstrained video as input will be presented.
A sparse set of representative frames is first selected from the full set of video frames to generate the best panorama, based on an MST-based ordering of frames aligned with the proposed warping models. The alignment quality of DiffeoWarps, DiffeoMeshes, and GreenWarps is found to be superior when the results are evaluated using standard metrics. Moreover, qualitative results show that the panoramas generated by our methods are better than those of existing panorama generation software and algorithms. |
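The trajectory modeling step can be illustrated with a minimal sketch of Kendall pre-shape normalization, which removes translation and scale from a trajectory so that clustering compares shapes only. This is a generic sketch of the standard pre-shape map (the function name and 2-D assumption are illustrative, not taken from the talk):

```python
import numpy as np

def preshape(traj):
    """Map a trajectory (n points x 2 coords) to Kendall pre-shape space:
    remove translation by centering, remove scale by unit-norm division."""
    traj = np.asarray(traj, dtype=float)
    centered = traj - traj.mean(axis=0)   # remove translation
    norm = np.linalg.norm(centered)       # Frobenius norm of the point matrix
    if norm == 0:
        raise ValueError("degenerate trajectory (all points coincide)")
    return centered / norm                # remove scale

# Two trajectories differing only by translation and scale map to the
# same pre-shape, so shape-space clustering treats them as one shape.
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = 3.0 * a + np.array([5.0, -2.0])
print(np.allclose(preshape(a), preshape(b)))  # True
```

In full Kendall shape space, rotation is also quotiented out; the sketch stops at the pre-shape sphere for brevity.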
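The "demons-based" registration behind DiffeoWarps builds on Thirion's classic demons force, which displaces each pixel along the fixed image's gradient in proportion to the intensity difference. Below is a hedged sketch of one such force-field step only (the `demons_update` name and toy images are illustrative; DiffeoWarps adds edge preservation and further machinery not shown here):

```python
import numpy as np

def demons_update(fixed, moving, eps=1e-8):
    """One Thirion demons force field: per-pixel displacement driven by
    the intensity difference along the fixed image's gradient.
    Returns (ux, uy) displacement components."""
    gy, gx = np.gradient(fixed)           # image gradients (rows=y, cols=x)
    diff = moving - fixed                 # intensity mismatch per pixel
    denom = gx**2 + gy**2 + diff**2 + eps # demons normalization term
    return diff * gx / denom, diff * gy / denom

# Toy example: a vertical edge in `moving` sits one column left of the
# edge in `fixed`; the force pushes it in the positive-x direction.
fixed = np.zeros((4, 4)); fixed[:, 2:] = 1.0
moving = np.zeros((4, 4)); moving[:, 1:] = 1.0
ux, uy = demons_update(fixed, moving)
print(ux[0, 1] > 0)  # True
```

In practice the field is smoothed (e.g. Gaussian regularization) and iterated; the edge-preserving behavior named in the talk is the authors' contribution, not captured by this classic step.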
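The MST-based ordering of frames can be sketched as follows: given pairwise alignment costs between candidate frames, a minimum spanning tree is grown and frames are taken in the order they attach to the tree, so each new frame is stitched against a well-aligned neighbor. This is a generic Prim's-algorithm sketch under an assumed cost matrix; the actual cost function and traversal used in the talk may differ:

```python
import numpy as np

def mst_frame_order(cost):
    """Order frames by growing a minimum spanning tree (Prim's algorithm)
    over a symmetric matrix of pairwise alignment costs; returns the list
    of frame indices in attachment order, starting from frame 0."""
    n = cost.shape[0]
    order = [0]                        # arbitrary root frame
    best = cost[0].astype(float).copy()  # cheapest edge into the tree, per frame
    best[0] = np.inf
    while len(order) < n:
        nxt = int(np.argmin(best))     # cheapest frame to attach next
        order.append(nxt)
        best = np.minimum(best, cost[nxt])
        best[order] = np.inf           # tree members are never re-attached
    return order

# Hypothetical costs: consecutive frames align cheaply, distant ones do not.
costs = np.array([[0., 1., 4., 9.],
                  [1., 0., 2., 8.],
                  [4., 2., 0., 3.],
                  [9., 8., 3., 0.]])
print(mst_frame_order(costs))  # [0, 1, 2, 3]
```

Attaching frames in MST order keeps every stitch between the two most compatible frames available, which is why the ordering helps panorama quality.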