Cassandra cluster - replacing a dead node
This section covers situations that can cause failures in a Cassandra cluster and walks through the steps to handle them. The steps are specific to Cassandra version 1.1.6 but can be applied to other versions as well.
The problem: you are running an n-node cluster, say a three-node cluster, and one node goes down because of an unrecoverable hardware failure. The solution: replace the dead node with a new node.
The following are the steps to achieve the solution:
1. Confirm the node failure using the nodetool ring command:
bin/nodetool ring -h hostname
2. The dead node will be shown as Down; let's assume node3 (192.168.1.55) is down:
192.168.1.54  datacenter1  rack1  Up    Normal  755.25 MB  50.00%  0
192.168.1.55  datacenter1  rack1  Down  Normal  400.62 MB  25.00%  42535295865117307932921825928971026432
192.168.1.56  datacenter1  rack1  Up    Normal  793.06 MB  25.00%  85070591730234615865843651857942052864
3. Install and configure Cassandra on the replacement node. Make sure to remove any old installation data, if present, from the replacement node using the following command:
sudo rm -rf /var/lib/cassandra/*
Here, /var/lib/cassandra is the path of the Cassandra data directory.
4. Configure cassandra.yaml on the replacement node so that it holds the same non-default settings as the pre-existing Cassandra cluster (see the sketch after this list).
5. Set initial_token in the cassandra.yaml file of the replacement node to the dead node's token minus 1, that is, 42535295865117307932921825928971026431 (also shown in the sketch below).
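The following is a minimal cassandra.yaml sketch for the replacement node, covering steps 4 and 5. The cluster_name, listen_address, rpc_address, and seeds values are placeholders that must match your own cluster's non-default settings; only the initial_token value comes from the example above (the dead node's token minus 1).
cluster_name: 'MyCluster'                # must match the existing cluster's name (placeholder)
initial_token: 42535295865117307932921825928971026431   # dead node's token - 1
listen_address: 192.168.1.57             # the replacement node's own IP (assumed)
rpc_address: 192.168.1.57                # assumed
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.54,192.168.1.56"   # surviving nodes used as seeds (assumed)
After editing the file, the replacement node can be started and will take over the dead node's token range; the remaining settings are left at their defaults in this sketch.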