broader range of MR images, as it outperforms conventional transforms such as the Discrete Cosine Transform and the Discrete Wavelet Transform.
We reviewed various CS recovery algorithms. Matrix and vector processing is best suited to implementing the computational functions they require. This paper contributes by identifying and implementing the computational functions required for a VLSI implementation. Functional verification of the VLSI implementation was performed against LAPACK/LINPACK/TNT (Template Numerical Toolkit) software wherever required.
The paper is organized as follows. Section 2 describes the computational functions, Section 3 presents the design and implementation, and Section 4 gives the results. The conclusion and future scope are presented in Section 5.
2 Computational Functions
The review of various CS recovery algorithms [1-10] highlights the use of the L1 norm of a vector, the L2 norm of a vector, and the matrix SVD as the basis for implementing various CS algorithms. The current work focuses on the following CS recovery algorithms: L1-minimization, convex regularization [2-4], iterative thresholding methods [6-7] and Singular Value Decomposition (SVD) [8-10].
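As an illustration of the iterative thresholding family cited above, the classical iterative soft-thresholding algorithm (ISTA) can be sketched as follows. This is a generic NumPy sketch for reference only, not the authors' VLSI implementation; the measurement matrix, sparsity pattern and parameter values are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: shrink magnitudes toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic term, then shrinkage on the L1 term.
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Toy recovery problem (illustrative data, not from the paper):
# a 3-sparse signal measured by a random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 17, 60]] = [1.0, -2.0, 1.5]
y = A @ x0
x_hat = ista(A, y)
```

With enough measurements relative to the sparsity, the recovered `x_hat` concentrates its largest entries on the true support of `x0`.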
The computational functions are defined as follows: the L1 norm of a vector, ||x||_1 = sum_i |x_i|; the L2 norm of a vector, ||x||_2 = sqrt(sum_i |x_i|^2); and the matrix SVD, which includes solving the least-squares problem, i.e. computing

x = arg min ||Ax - y||_2    (1)

To compute equation (1), the SVD is computed as the factorization of the matrix A (= USV^T), where U and V are orthogonal and S is the diagonal matrix of singular values of A.
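The three computational functions above can be sketched with NumPy, which is used here purely for illustration, in the same spirit as the paper's LAPACK-based verification; the matrix and vectors are made-up example data, and A is assumed to have full column rank.

```python
import numpy as np

# Illustrative data (not from the paper); A is assumed full column rank.
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
x = np.array([3.0, -4.0])

l1 = np.sum(np.abs(x))               # L1 norm: sum of absolute values
l2 = np.sqrt(np.sum(np.abs(x) ** 2)) # L2 norm: sqrt of sum of squares

# Least-squares solution x* = arg min ||Ax - y||_2 via the SVD A = U S V^T:
# x* = V diag(1/s_i) U^T y (the pseudoinverse applied to y).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_star = Vt.T @ ((U.T @ y) / s)

# Cross-check against the library least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, y, rcond=None)
```

For the example vector x = (3, -4), the L1 norm is 7 and the L2 norm is 5, and the SVD-based solution matches the library solver.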
There are two methods for computing the SVD: reduction to bi-diagonal form followed by the QR algorithm, and the Jacobi rotation method. The QR algorithm is computationally much more efficient than the Jacobi method; on the other hand, Jacobi methods exhibit much more inherent parallelism than QR. The review indicates that existing SVD FPGA implementations [11-15] use Jacobi rotation. The Jacobi SVD of [15] targets small and mid-size matrices, around 8x8. The Jacobi method works well for real symmetric matrices; however, for matrices of order greater than about 10 it is slower, by a significant constant factor, than bi-diagonalization followed by the QR method [16].
Compressed sensing recovery normally deals with larger matrices. We require an SVD method that handles dense, real, non-symmetric matrices, which is not the case with the Jacobi method. The accuracy requirement is also important and calls for floating-point operations. We propose computing the SVD as Householder reduction to bi-diagonal form (bi-diagonalization) followed by the QR algorithm. Householder reduction [17, 18] reduces a matrix to bi-diagonal form by repeated transformations. Each transformation annihilates the required part of a whole column and the whole corresponding row using a Householder matrix of the form P = I - 2w.w^T.
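The Householder reduction described above can be sketched as follows. This is a plain NumPy reference sketch of the standard Golub-Kahan bi-diagonalization (alternating left and right reflections P = I - 2ww^T), offered as an illustration of the arithmetic, not as the paper's VLSI design; the function names and test matrix are assumptions.

```python
import numpy as np

def householder_vector(a):
    """Unit vector w such that (I - 2 w w^T) a = -sign(a[0]) * ||a|| * e1."""
    w = a.astype(float).copy()
    w[0] += np.copysign(np.linalg.norm(a), a[0])  # sign choice avoids cancellation
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

def bidiagonalize(A):
    """Reduce A (m x n, m >= n) to upper bi-diagonal form B = U^T A V
    by repeated left and right Householder reflections."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left reflection: annihilate column k below the diagonal.
        w = householder_vector(B[k:, k])
        B[k:, :] -= 2.0 * np.outer(w, w @ B[k:, :])
        if k < n - 2:
            # Right reflection: annihilate row k to the right of the superdiagonal.
            w = householder_vector(B[k, k + 1:])
            B[:, k + 1:] -= 2.0 * np.outer(B[:, k + 1:] @ w, w)
    return B

# Demo on an illustrative random 6x4 matrix.
A = np.random.default_rng(1).standard_normal((6, 4))
B = bidiagonalize(A)
```

Because each reflection is orthogonal, B has exactly the same singular values as A, which is what allows the subsequent QR iteration to work on the much simpler bi-diagonal matrix.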