Digital Signal Processing Reference
The fundamental concept behind this method is Robust Principal Component Analysis,
which eliminates some of the limitations of conventional PCA.
Robust principal component analysis aims to recover a low-rank matrix from
corrupted observations by minimizing the difference between the original matrix
and its low-rank approximation [47]. If we assume that the data all lie near some
low-dimensional subspace, the data points can be stacked as column vectors of a
matrix M. Mathematically,

M = L0 + N0
where L0 has low rank and N0 is a small perturbation matrix. Classical Principal
Component Analysis (PCA) seeks the best low-rank estimate of L0 by solving

minimize ‖M − L‖  subject to  rank(L) ≤ k,

where ‖·‖ denotes the 2-norm, i.e., the largest singular value of M. Classical PCA
is extremely brittle to the presence of outliers. In other words, even a single corrupted
point can arbitrarily alter the quality of the approximation [48]. This problem is
overcome by Robust PCA, which aims to recover a low-rank matrix L0 from highly
corrupted observations M. Mathematically,

M = L0 + S0

where S0 can have arbitrarily large magnitude and is assumed to be sparse.
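The brittleness claim is easy to check numerically: the rank-constrained minimization above is solved exactly by the truncated SVD, yet a single large corrupted entry (a one-element S0) ruins the rank-k estimate. The sketch below illustrates this; the matrix sizes, rank, and outlier magnitude are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 data matrix L0: data points stacked as columns.
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))

def best_rank_k(M, k):
    # Truncated SVD: the exact solution of
    #   minimize ||M - L||  subject to  rank(L) <= k.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Without outliers, the rank-2 approximation recovers L0 essentially exactly.
clean_err = np.linalg.norm(L0 - best_rank_k(L0, 2), 2)

# Corrupt a single entry with one large outlier (a one-element sparse S0).
M = L0.copy()
M[0, 0] += 1e4
corrupt_err = np.linalg.norm(L0 - best_rank_k(M, 2), 2)

print(clean_err)    # essentially zero
print(corrupt_err)  # large: one corrupted entry ruins the estimate
```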
A sparse signal can be exactly recovered from a small number of its random
measurements, and a low-rank matrix can be exactly completed from a few of its
entries sampled at random. When a signal is neither sparse nor low-rank, its low-rank
and sparse structure can be explored by either approximation or decomposition. Here,
the corrupted entries are unknown and the errors can be arbitrarily large, but are
assumed to be sparse. Most matrices can be efficiently and exactly recovered from
most error sign-and-support patterns by solving a simple convex program, for which
we may consider a fast and provably convergent algorithm. The method holds even
when the rank of the matrix grows nearly in proportion to the dimensionality of the
observation space and the number of errors grows in proportion to the total number
of entries in the matrix. This method assumes that all the observations lie near some
low-dimensional subspace, i.e., if all the observations are stacked as column vectors
of a matrix, the matrix should be of low-rank. Even if the dimension increases, sparse
and low-rank structures can be efficiently and exactly separated from the corrupted
observations.
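A convex program of the kind described here is commonly instantiated in the RPCA literature as principal component pursuit, which minimizes ‖L‖* + λ‖S‖₁ subject to L + S = M. The following is a minimal sketch of an alternating proximal (augmented-Lagrangian) solver under that reading; the choices of lam and mu follow common practice, and the problem sizes and corruption levels are illustrative assumptions, not from the source.

```python
import numpy as np

def shrink(X, tau):
    # Soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    # Principal component pursuit:
    #   minimize ||L||_* + lam * ||S||_1  subject to  L + S = M,
    # solved by alternating proximal steps on the augmented Lagrangian.
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # common weight choice
    mu = m * n / (4.0 * np.abs(M).sum())    # common penalty choice
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Illustrative test problem: rank-2 L0 plus ~5% large sparse corruptions S0.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random((60, 60)) < 0.05
S0 = np.where(mask, 10.0 * rng.choice([-1.0, 1.0], size=(60, 60)), 0.0)
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small relative error
```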
Zhou et al. proposed the Go Decomposition (GoDec) algorithm to efficiently and
robustly estimate the low-rank part L and the sparse part S of a matrix X = L + S + G
with noise G [49]. GoDec alternately assigns the low-rank approximation of X − S
to L and the sparse approximation of X − L to S.