How it works…
Weka's attributes are an integral part of its data model, and the algorithms we'll see later can be sensitive to which columns are in the dataset. To work with only the attributes that matter, we can hide them or delete them altogether using the functions in this recipe.
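The exact helper functions belong to this recipe's earlier steps, but the core idea of deleting attributes can be sketched with Weka's Remove filter. The function name delete-attrs below is illustrative, not from the recipe itself:

```clojure
(import [weka.filters Filter]
        [weka.filters.unsupervised.attribute Remove])

;; A sketch of deleting attributes using Weka's Remove filter.
;; `delete-attrs` is an illustrative name, not part of the recipe.
(defn delete-attrs
  "Returns a copy of dataset without the attributes at the given
  1-based indices, e.g. \"1,3-4\" or \"last\"."
  [dataset indices]
  (let [remove-filter (doto (Remove.)
                        (.setAttributeIndices indices)
                        (.setInputFormat dataset))]
    (Filter/useFilter dataset remove-filter)))
```

Because the filter returns a new Instances object, the original dataset is left untouched, which makes it easy to experiment with different attribute subsets.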
Discovering groups of data using K-Means clustering
One of the most popular and well-known clustering methods is K-Means clustering. It's
conceptually simple. It's also easy to implement and is computationally cheap. We can get
decent results quickly for many different datasets.
On the downside, it sometimes gets stuck in a local optimum and misses a better solution. Generally, K-Means clustering performs best when the groups in the data are spatially distinct and roughly spherical. If the natural groups in the data overlap, the clusters that K-Means generates will not be able to distinguish them properly.
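To make the algorithm concrete, here is a minimal sketch of a single K-Means iteration in plain Clojure. This is only to illustrate the idea; the recipe itself uses Weka's SimpleKMeans:

```clojure
;; A minimal, illustrative sketch of one K-Means iteration:
;; assign each point to its nearest centroid, then recompute
;; each centroid as the mean of its assigned points.
(defn sq-dist [a b]
  (reduce + (map #(let [d (- %1 %2)] (* d d)) a b)))

(defn nearest [centroids point]
  (apply min-key #(sq-dist % point) centroids))

(defn centroid [points]
  (let [n (count points)]
    (mapv #(/ % n) (apply map + points))))

(defn k-means-step
  "Performs one iteration of K-Means: groups points by their
  nearest centroid and returns the recomputed centroids."
  [centroids points]
  (->> points
       (group-by #(nearest centroids %))
       vals
       (mapv centroid)))
```

Iterating k-means-step until the centroids stop moving gives the full algorithm; the dependence of the result on the initial centroids is exactly why K-Means can get stuck in a local optimum.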
Getting ready
For this recipe, we'll need the same dependencies in our project.clj file that we used in the Loading CSV and ARFF files into Weka recipe.
However, we'll need a slightly different set of imports in our script or REPL:
(import [weka.core EuclideanDistance]
        [weka.clusterers SimpleKMeans])
For data, we'll use the Iris dataset, which is often used for learning about and testing clustering algorithms. You can download this dataset from the Weka wiki at http://weka.wikispaces.com/Datasets or from http://www.ericrochester.com/clj-data-analysis/UCI/iris.arff. We will load it using load-arff, which was covered in Loading CSV and ARFF files into Weka.
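With the imports and data in place, building the clusterer looks roughly like this. This is a sketch: it assumes the load-arff function from the earlier recipe, an illustrative file path, and three clusters, since the Iris data contains three species:

```clojure
;; A sketch of clustering the Iris data with Weka's SimpleKMeans.
;; Assumes load-arff from the Loading CSV and ARFF files into Weka
;; recipe; the file path here is illustrative.
(def iris (load-arff "data/UCI/iris.arff"))

(def k-means
  (doto (SimpleKMeans.)
    (.setNumClusters 3)                       ; Iris has 3 species
    (.setDistanceFunction (EuclideanDistance.))
    (.buildClusterer iris)))

;; Ask which cluster the first instance falls into.
(.clusterInstance k-means (.instance iris 0))
```

The doto form mirrors how Weka's Java API is usually driven from Clojure: construct the clusterer, set its options, and then call buildClusterer on the dataset.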