66.7%   8718be22-ed20-43ca-95c1-701864bb1e20  rack1
UN  127.0.0.3  71.09 KB  1  66.7%   0039bf7e-4e9c-4028-b3b6-1d027aedc690  rack1
Here, you can see that node 4 (127.0.0.4) is up and has joined the cluster ring.
You may be wondering what happened to node 2 (127.0.0.2). Once node 2 is back
up and running, it automatically replaces the substitute node (i.e., node 4): as soon
as node 2 rejoins the ring, it takes ownership back from node 4. Check node 2's
server logs for details (see Figure 9-1).
Figure 9-1. Taking ownership back from the replacement node (i.e., 127.0.0.4)
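As a quick sketch, you can confirm the handover from the command line with nodetool status. This assumes a local multi-node test cluster on the 127.0.0.x addresses used above and that nodetool is on your PATH; adjust the host to your own setup:

```shell
# Check the ring from any live node; each node appears as one row.
# A leading "UN" means Up/Normal.
nodetool -h 127.0.0.1 status

# Watch the two nodes involved in the replacement. After node 2 rejoins,
# it should show as UN and the substitute node's ownership returns to it.
nodetool -h 127.0.0.1 status | grep -E '127\.0\.0\.(2|4)'
```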
Data Backup and Restoration
Database backup means regularly keeping a copy of the data in a safe location. In
case of a natural calamity or hardware failure, the backup can be used to restore the
database. Because database scalability and performance should never come at the
cost of data loss, Cassandra also provides support for backup and restoration.
Backing up data in Cassandra is achieved by creating a snapshot. Cassandra
provides a mechanism for taking data snapshots using the nodetool utility, which we
have already discussed in this and previous chapters. (Chapter 10 will cover
nodetool and other Cassandra-related utilities in detail.) A snapshot of an entire
keyspace can be taken while the cluster is up and running, but restoring it requires
taking the cluster node down.
Using nodetool snapshot and sstableloader
Issuing a snapshot command first flushes all in-memory memtable data to disk as
SSTables and then creates hard links to the flushed SSTables in a snapshot
directory on each node.
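A minimal backup-and-restore cycle with these two tools might look like the following sketch. The keyspace name (demo), table name (users), snapshot tag, and data directory path are all hypothetical and depend on your installation:

```shell
# 1. Take a tagged snapshot of the (hypothetical) keyspace "demo".
#    This flushes memtables and hard-links the resulting SSTables under
#    each table's snapshots/ directory.
nodetool snapshot -t mybackup demo

# 2. The snapshot files live under the data directory, for example
#    (exact path varies by installation and table ID):
#    /var/lib/cassandra/data/demo/users-<table-id>/snapshots/mybackup/

# 3. To restore with sstableloader, arrange the snapshot SSTables in a
#    directory laid out as <keyspace>/<table> and stream them into one
#    or more live nodes given with -d.
sstableloader -d 127.0.0.1 /tmp/restore/demo/users
```

Note that sstableloader streams data into a running cluster, whereas the alternative of copying snapshot files directly back into a node's data directory requires that node to be taken down first, as mentioned above.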