A Fuzzy Background Modeling Approach for Motion
Detection in Dynamic Backgrounds
Zhenjie Zhao 1 , Thierry Bouwmans 2 , Xuebo Zhang 1,* , and Yongchun Fang 1
1 Institute of Robotics and Automatic Information System,
Nankai University, China
2 Laboratory of Mathematics, Images and Applications,
University of La Rochelle, France
thierry.bouwmans@univ-lr.fr,
{zhaozj,zhangxb,yfang}@robot.nankai.edu.cn
Abstract. Based on the Type-2 Fuzzy Gaussian Mixture Model (T2-FGMM) and
Markov Random Fields (MRF), we propose a novel background modeling method
for motion detection in dynamic scenes. The key idea of the proposed approach
is the introduction of spatial-temporal constraints into the T2-FGMM through a
Bayesian framework. Pixel-level evaluation results demonstrate that the proposed
method outperforms both the well-established Gaussian Mixture Model (GMM)
and the T2-FGMM in typical dynamic backgrounds such as waving trees and
rippling water.
Keywords: T2-FGMM, MRF, motion detection, dynamic backgrounds.
1 Introduction
Motion detection is commonly utilized as a pre-processing step in many computer
vision tasks such as object detection, recognition, and tracking. The aim of motion
detection is to separate the foreground (FG) of interest from the background (BG);
background subtraction is the most widely used approach, as surveyed in [1, 2, 3].
Among the many background subtraction methods, statistical approaches, especially
the Gaussian Mixture Model (GMM) originally proposed by Stauffer and Grimson [4],
are the best known and most effective. Building on the GMM, the authors of [5]
introduced spatial-temporal constraints to enhance the performance of background
subtraction. Although the GMM works well for multimodal backgrounds, it cannot
yield satisfactory results for dynamic backgrounds. For this reason, Monnet et al.
proposed an online auto-regressive model to deal with dynamic backgrounds [6]. In
addition, inspired by biological vision, Mahadevan and Vasconcelos proposed a new
method for highly dynamic scenes [7]. More recently, El Baf et al. [8, 9] found that
the T2-FGMM [10] performs well for dynamic scene modeling. Unfortunately,
spatial-temporal constraints are not considered in these works.
* Corresponding author.
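Since the GMM of Stauffer and Grimson [4] underlies all of the models discussed above, the following is a minimal single-pixel sketch of its update rule. The class name `PixelGMM` and the parameter values (K = 3 components, learning rate `alpha = 0.05`, background weight threshold `t = 0.7`) are illustrative assumptions for this sketch, not settings taken from this paper.

```python
# Minimal per-pixel sketch of a Stauffer-Grimson-style GMM background model.
# Parameter values are illustrative assumptions, not values from the paper.
import math

class PixelGMM:
    def __init__(self, k=3, alpha=0.05, t=0.7, init_var=36.0):
        self.alpha = alpha        # learning rate for weights and parameters
        self.t = t                # cumulative weight threshold for background
        self.init_var = init_var  # variance assigned to newly created Gaussians
        # Each Gaussian is stored as a mutable [weight, mean, variance] triple.
        self.gaussians = [[1.0 / k, 128.0 + 10.0 * i, init_var] for i in range(k)]

    def update(self, x):
        """Update the mixture with intensity x; return True if x is background."""
        # A Gaussian "matches" if x lies within 2.5 standard deviations of its mean.
        matched = None
        for g in self.gaussians:
            _, mu, var = g
            if abs(x - mu) <= 2.5 * math.sqrt(var):
                matched = g
                break
        if matched is None:
            # No match: replace the least-weighted Gaussian with one centered at x.
            self.gaussians.sort(key=lambda g: g[0])
            self.gaussians[0] = [0.05, float(x), self.init_var]
        # Decay all weights; reinforce only the matched Gaussian.
        for g in self.gaussians:
            g[0] = (1.0 - self.alpha) * g[0] + (self.alpha if g is matched else 0.0)
        if matched is not None:
            # Simplified update: reuse alpha as the parameter learning rate rho.
            rho = self.alpha
            matched[1] = (1.0 - rho) * matched[1] + rho * x
            matched[2] = (1.0 - rho) * matched[2] + rho * (x - matched[1]) ** 2
        # Renormalize weights to sum to one.
        s = sum(g[0] for g in self.gaussians)
        for g in self.gaussians:
            g[0] /= s
        # Background = highest-ranked Gaussians (by weight/sigma) whose
        # cumulative weight first exceeds the threshold t.
        ranked = sorted(self.gaussians,
                        key=lambda g: g[0] / math.sqrt(g[2]), reverse=True)
        cum, background = 0.0, []
        for g in ranked:
            background.append(g)
            cum += g[0]
            if cum > self.t:
                break
        return matched in background if matched is not None else False
```

For a pixel that repeatedly observes a stable intensity, the matched Gaussian's weight grows and its variance shrinks, so that intensity is classified as background, while a sudden outlier value fails to match any background Gaussian and is flagged as foreground. The paper's contribution replaces the crisp Gaussians above with type-2 fuzzy Gaussians and couples neighboring pixel decisions through an MRF, which this sketch does not attempt.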