sudo -bE /usr/local/cassandra/apache-cassandra-1.1.6/bin/nodetool -h 10.3.1.24 compact
mydomain@my-cass3:/home/ubuntu$ sudo -E /usr/local/cassandra/apache-cassandra-1.1.6/bin/nodetool -h 10.3.1.24 compactionstats
pending tasks: 1
compaction type   keyspace      column family   bytes compacted   bytes total     progress
Compaction        my_keyspace   mycf            1236772           1810648499806   0.00%
Active compaction remaining time: 29h58m42s
mydomain@my-cass3:/home/ubuntu$
Cassandra has two types of compaction: minor compaction and major compaction. A minor compaction is executed automatically whenever a new SSTable is created, and it removes tombstones (that is, the deleted entries). A major compaction is triggered manually, using the preceding nodetool command, and it can be applied at the node, keyspace, or column family level.
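The preceding command compacts the whole node; the same command can be narrowed by appending a keyspace and, optionally, a column family name. A minimal sketch, reusing the host, keyspace, and column family names from the sample session above (it requires a running cluster to execute):

# Major compaction of a single column family within one keyspace
bin/nodetool -h 10.3.1.24 compact my_keyspace mycf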
Decommission : This is, in a way, the opposite of bootstrap and is triggered when we want a node to leave the cluster. The moment a live node receives this command, it stops accepting new writes, flushes its memtables, and starts streaming its data to the nodes that will become the new owners of the key range it currently owns:
bin/nodetool -h 192.168.1.54 decommission
Removenode : This command is executed when a node is dead, that is, physically unavailable. It informs the other nodes that the node is unavailable, and Cassandra replication kicks into action to restore the correct replication by creating copies of the data as per the new ring ownership:
bin/nodetool removenode <UUID>
bin/nodetool removenode force
Repair : The nodetool repair command is executed to fix the data on a node. This is a very important tool for ensuring data consistency, particularly for nodes that rejoin the cluster after being down for a period of time. Let's assume a cluster of four nodes that are catering to continuous writes through a Storm topology. Here, one of the nodes goes down and joins the ring again after an hour or two.
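In that situation, running repair on the rejoined node makes it stream from its replicas any writes it missed while it was down. A minimal sketch, reusing the host style from the decommission example; the keyspace name is an assumption for illustration, and the command needs a live cluster to run:

# Repair one keyspace on the rejoined node (run against that node's address)
bin/nodetool -h 192.168.1.54 repair my_keyspace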