Based on the $k$ neighboring instance vectors, a matrix $A \in \mathbb{R}^{k \times (n-q)}$, a matrix $B \in \mathbb{R}^{k \times q}$ and a vector $w \in \mathbb{R}^{(n-q) \times 1}$ are formed. The $i$th row vector $a_i$ of the matrix $A$ consists of the $i$th nearest neighbor instance $x_{S_i} \in \mathbb{R}^{1 \times n}$, $1 \leq i \leq k$, with its elements at the $q$ locations of the MVs of $x_1$ excluded. Each column vector of the matrix $B$ consists of the values at the $j$th location of the MVs ($1 \leq j \leq q$) of the $k$ vectors $x_{S_i}$. The elements of the vector $w$ are the $n-q$ elements of the instance vector $x_1$ whose missing items are deleted. After the matrices $A$ and $B$ and the vector $w$ are formed, the least squares problem is formulated as

$$\min_{z} \left\| A^T z - w \right\|_2 \qquad (4.65)$$

Then, the vector $u = (\alpha_1, \alpha_2, \ldots, \alpha_q)^T$ of the $q$ MVs can be estimated as

$$u = B^T z = B^T \left( A^T \right)^{\dagger} w, \qquad (4.66)$$

where $\left( A^T \right)^{\dagger}$ is the pseudoinverse of $A^T$.
Table 4.1 Recent and most well-known imputation methods involving ML techniques

Clustering: MLP hybrid [4]; Rough fuzzy subspace clustering [89]; Fuzzy c-means with SVR and GAs [3]; Biclustering based [32]; KNN based [46]; Hierarchical clustering [30]; K2 clustering [39]; Weighted K-means [65]; Gaussian mixture clustering [63]

Kernel methods: Mixture-kernel-based iterative estimator [105]

Nearest neighbors: LLS based [47]; ICkNNI [40]; Iterative KNNI [101]; CGImpute [22]; Bootstrap for maximum likelihood [72]; kDMI [75]

Ensembles: Random Forest [42]; Decision forest [76]; Bootstrap [56]

ANNs: Group Method of Data Handling (GMDH) [104]; RBFN based [90]; Wavelet ANNs [64]; Multi-layer perceptron [88]; ANNs framework [34]; Self-organizing maps [58]; Generative Topographic Mapping [95]

Similarity and correlation: FIMUS [77]

Parameter estimation for regression imputation: EAs for covariance matrix estimation [31]; Iterative mutual information imputation [102]; CMVE [87]; DMI (EM + decision trees) [74]; WLLSI [12]

Bayesian networks: Dynamic Bayesian networks [11]; Bayesian networks with weights [60]
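Many of the families in Table 4.1 also have readily available implementations. As an illustration in the spirit of the KNNI entries (the toy data matrix is ours), a k-nearest-neighbor imputation can be run with scikit-learn's KNNImputer:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0,    2.0, np.nan],
              [3.0,    4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0,    8.0, 7.0]])

# Each MV is filled with the average of that feature over the 2 nearest
# neighbors, with distances computed on the mutually observed features.
print(KNNImputer(n_neighbors=2).fit_transform(X))
```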
 