Published on Mon Oct 10 2016

EM-Based Mixture Models Applied to Video Event Detection

Alessandra Martins Coelho, Vania V. Estrela

Abstract

Surveillance system (SS) development requires hi-tech support to overcome the shortcomings related to the massive quantity of visual information produced by SSs. Anything but greatly reduced human monitoring has become infeasible for physical and economic reasons, so a move towards automated surveillance is the only way out. For a computer vision system, automatic video event comprehension is a challenging task due to motion clutter, event understanding in complex scenes, multilevel semantic event inference, contextualization of events and views obtained from multiple cameras, uneven motion scales, shape changes, occlusions, and object interactions, among many other impairments. In recent years, state-of-the-art work on video event classification and recognition has addressed modeling events to discern context, detecting incidents with only one camera, low-level feature extraction and description, and high-level semantic event classification and recognition. Even so, it is still very burdensome to retrieve or label a specific video segment relying solely on its content. Principal component analysis (PCA) is widely known and used, but when combined with other techniques, such as the expectation-maximization (EM) algorithm, its computation becomes more efficient. This chapter introduces advances associated with probabilistic PCA (PPCA) analysis of video events, and it also looks closely at ways and metrics to evaluate these less computationally intensive EM implementations of PCA and kernel PCA (KPCA).
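
As a concrete illustration of the PCA/EM combination the abstract refers to, the sketch below runs the classical EM iterations for probabilistic PCA (in the spirit of Tipping and Bishop). It is a minimal NumPy example, not the chapter's implementation; the function name ppca_em, the data matrix X (e.g., one flattened frame or feature vector per row), and the choice of q latent components are illustrative assumptions.

import numpy as np

# Minimal EM estimation of probabilistic PCA (illustrative sketch only).
# X: (N, d) data matrix, one sample per row; q: number of latent components.
def ppca_em(X, q, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu                               # centered data
    W = rng.standard_normal((d, q))           # random factor loadings
    sigma2 = 1.0                              # isotropic noise variance

    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z_n
        M = W.T @ W + sigma2 * np.eye(q)      # (q, q)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                    # E[z_n], stacked as (N, q)
        Ezz = N * sigma2 * Minv + Ez.T @ Ez   # sum_n E[z_n z_n^T]

        # M-step: re-estimate W and the noise variance
        W = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Xc ** 2)
                  - 2.0 * np.sum((Xc @ W) * Ez)
                  + np.trace(Ezz @ W.T @ W)) / (N * d)

    return W, mu, sigma2

if __name__ == "__main__":
    # Synthetic stand-in for frame features: 500 samples of dimension 64.
    X = np.random.default_rng(1).standard_normal((500, 64))
    W, mu, s2 = ppca_em(X, q=5)
    print(W.shape, s2)

Because each iteration only inverts q-by-q matrices, the EM route avoids forming and diagonalizing the full d-by-d covariance matrix, which is what makes it attractive for high-dimensional video data.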

Related Papers

Sun Jun 14 2020
Computer Vision
Hyper RPCA: Joint Maximum Correntropy Criterion and Laplacian Scale Mixture Modeling On-the-Fly for Moving Object Detection
Robust Principal Component Analysis (RPCA) aims to separate the temporally varying (i.e., moving) foreground objects from the static background in video. We show that such assumptions can be too restrictive in practice, which limits the effectiveness of the classic RPCA. We propose a…
Thu Oct 11 2012
Machine Learning
Unsupervised Detection and Tracking of Arbitrary Objects with Dependent Dirichlet Process Mixtures
This paper proposes a technique for the unsupervised detection and tracking of arbitrary objects in videos. The technique uses a Generalized Polya Urn dependent Dirichlet process mixture (GPUDDPM) to model image pixel data.
Sun Feb 19 2017
Machine Learning
Online Robust Principal Component Analysis with Change Point Detection
Online moving window robust principal component analysis (OMWRPCA) can track not only a slowly changing subspace but also an abruptly changed one. By embedding hypothesis testing into the algorithm, OMWRPCA can detect change points of the underlying subspaces.
Fri Feb 09 2018
Computer Vision
Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models
An unsupervised learning framework for analyzing activities and interactions in surveillance videos. Three levels of video events are connected by a Hierarchical Dirichlet Process (HDP) model. Atomic activities are represented as distributions of low-level features, while complicated interactions are represented by distributions of atomic activities.
Wed Apr 30 2014
Computer Vision
Dynamic Mode Decomposition for Real-Time Background/Foreground Separation in Video
The method is a novel application of a technique used for characterizing nonlinear dynamical systems. It decomposes the state of the system into low-rank terms whose Fourier components in time are known. DMD terms with Fourier frequencies near the origin (zero-modes)…
Sun May 12 2019
Computer Vision
On Flow Profile Image for Video Representation
Video representation is a key challenge in many computer vision applications such as video classification, video captioning, and video surveillance. We propose a novel approach for video representation that captures meaningful information including motion and appearance from a sequence of video frames.