INFO 11:45:43,652 Starting up server gossip
INFO 11:45:43,886 Joining: getting load information
INFO 11:45:43,901 Sleeping 90000 ms to wait for load information...
INFO 11:45:45,742 Node /192.168.1.5 is now part of the cluster
INFO 11:45:46,818 InetAddress /192.168.1.5 is now UP
INFO 11:45:46,818 Started hinted handoff for endPoint /192.168.1.5
INFO 11:45:46,865 Finished hinted handoff of 0 rows to endpoint /192.168.1.5
INFO 11:47:13,913 Joining: getting bootstrap token
INFO 11:47:16,004 New token will be 41707658470746813056713124104091156313 to assume load from /192.168.1.5
INFO 11:47:16,019 Joining: sleeping 30000 ms for pending range setup
INFO 11:47:46,034 Bootstrapping
Depending on how much data you have, you could see your new node in this state for some time.
Watching the logfile is a good way to determine that the node has finished bootstrapping, but
to watch progress while it's happening, use nodetool streams, which shows the data being
transferred for bootstrap. Eventually, the new node will accept the load from the first node,
and you'll see a successful indication that the new node has started up:
INFO 11:52:29,361 Sampling index for /var/lib/cassandra/data/Keyspace1/Standard1-1-Data.db
INFO 11:52:34,073 Streaming added /var/lib/cassandra/data/Keyspace1/Standard1-1-Data.db
INFO 11:52:34,088 Bootstrap/move completed! Now serving reads.
INFO 11:52:34,354 Binding thrift service to /192.168.1.7:9160
INFO 11:52:34,432 Cassandra starting up...
As you can see, it took around four minutes to transfer data.
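That figure can be checked against the timestamps in the log above: the node entered the Bootstrapping state at 11:47:46 and reported completion at 11:52:34. A quick way to confirm the elapsed time (a minimal sketch using Python's standard library, not part of Cassandra's tooling):

```python
from datetime import datetime

# Timestamps copied from the new node's log above.
start = datetime.strptime("11:47:46", "%H:%M:%S")  # "Bootstrapping"
done = datetime.strptime("11:52:34", "%H:%M:%S")   # "Bootstrap/move completed!"

elapsed = (done - start).total_seconds()
print(f"{elapsed / 60:.1f} minutes")  # 4.8 minutes
```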
During bootstrapping, the first (seed) node at 1.5 looks like this:
INFO 11:48:12,955 Sending a stream initiate message to /192.168.1.7 ...
INFO 11:48:12,955 Waiting for transfer to /192.168.1.7 to complete
INFO 11:52:28,903 Done with transfer to /192.168.1.7
Now we can run nodetool again to make sure that everything is set up properly:
$ bin/nodetool -h 192.168.1.5 ring
Address Status Load Range Ring
126804671661649450065809810549633334036
192.168.1.7 Up 229.56 MB 41707658470746813056713124104091156313 |<--|
192.168.1.5 Up 459.26 MB 126804671661649450065809810549633334036 |-->|
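The new node's token is not arbitrary: a bootstrapping node relieves the most-loaded node by splitting its token range roughly in half. A minimal sketch of that split, assuming RandomPartitioner's 2**127 token space (the actual token is refined by sampling real key load, so it won't match the logged value exactly):

```python
# Bisect the seed node's token range, as a rough bootstrap sketch.
RING_SIZE = 2 ** 127  # RandomPartitioner token space

seed_token = 126804671661649450065809810549633334036  # 192.168.1.5 in the ring above

# With a single node, its range wraps the whole ring, so the midpoint
# lies half the ring away from its token, modulo the ring size.
new_token = (seed_token + RING_SIZE // 2) % RING_SIZE
print(new_token)  # close to the 41707658470746813056713124104091156313 in the log
```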
Cassandra has automatically bootstrapped the 1.7 node by sending it half of the data from the
previous node (1.5). So now we have a two-node cluster. To ensure that it works, let's add a
value to the 1.5 node: