properties of files on all the nodes when a new node joins or leaves). Here is what
a cassandra-rackdc.properties file looks like:
# indicate the rack and dc for this node
dc=DC13
rack=RAC42
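The properties file alone does not activate anything; each node selects its snitch in cassandra.yaml. A minimal sketch follows, assuming the snitch being described here is GossipingPropertyFileSnitch, since that is the snitch that reads cassandra-rackdc.properties:
# cassandra.yaml: select the snitch for this node
endpoint_snitch: GossipingPropertyFileSnitch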
RackInferringSnitch: This snitch infers the location of a node from its IP address, using the second octet to assign the data center and the third octet to infer the rack. If you have four nodes, 10.110.6.30, 10.110.6.4, 10.110.7.42, and 10.111.3.1, this snitch will place the first two in the same data center and on the same rack, because they share the second octet (110) and the third octet (6). The third node lives in the same data center but on a different rack, as it shares the second octet while its third octet differs. The fourth, however, is assumed to live in a separate data center, because its second octet differs from the other three. The mapping is spelled out after this list.
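Spelled out, the mapping for those four addresses looks like this (the snitch simply uses the raw octet values as the data center and rack names):
# second octet -> data center, third octet -> rack
# 10.110.6.30 -> dc 110, rack 6
# 10.110.6.4  -> dc 110, rack 6  (same data center, same rack)
# 10.110.7.42 -> dc 110, rack 7  (same data center, different rack)
# 10.111.3.1  -> dc 111, rack 3  (different data center)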
EC2Snitch: This is meant for Cassandra deployments on the Amazon EC2 service. EC2 has regions, and within each region there are availability zones; for example, us-east-1e is the availability zone named 1e in the us-east region. This snitch uses the region name (us-east, in this case) as the data center and the availability zone (1e) as the rack.
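Enabling it is again a one-line change in cassandra.yaml; a sketch (note that Cassandra spells the class name Ec2Snitch):
# cassandra.yaml, on a node running in us-east-1e
endpoint_snitch: Ec2Snitch
# inferred: data center = us-east, rack = 1e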
EC2MultiRegionSnitch: The multi-region snitch is an extension of EC2Snitch that infers data centers and racks in the same way. However, you need to make sure that broadcast_address is set to the public IP provided by EC2, and that seed nodes are specified by their public IPs, so that inter-data-center communication works.
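The following cassandra.yaml sketch shows the settings involved; every IP address in it is a placeholder invented for the example:
# cassandra.yaml
endpoint_snitch: Ec2MultiRegionSnitch
listen_address: 10.0.0.5             # private IP, used for traffic within the region
broadcast_address: 203.0.113.10      # public IP provided by EC2
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "203.0.113.10,198.51.100.7"  # seed nodes listed by public IP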
DynamicSnitch: This snitch determines closeness from the recent performance delivered by a node, so a quickly responding node is perceived as closer than a slower one, irrespective of its physical proximity or its closeness in the ring. This is done to avoid overloading a slow-performing node. DynamicSnitch is used by all the other snitches by default. You can disable it, but that is not advisable.
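Its behavior is tuned through a few cassandra.yaml knobs; the values below are the defaults shipped with Cassandra versions contemporary with this text, so treat them as a sketch rather than a recommendation:
# cassandra.yaml
dynamic_snitch_update_interval_in_ms: 100    # how often latency scores are recalculated
dynamic_snitch_reset_interval_in_ms: 600000  # how often scores are reset to give slow nodes another chance
dynamic_snitch_badness_threshold: 0.1        # how much worse the preferred node may perform
                                             # before reads are routed elsewhere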
Now that snitches have given us the list of the fastest nodes holding the desired row keys, it's time to pull data from them. The coordinator node (the one the client is connected to) sends a command to the closest node to perform a read (we'll discuss local reads in a minute) and return the data. Then, depending on the ConsistencyLevel, the coordinator asks other replica nodes to perform the read but send back just a digest of the result. If read repair (discussed later) is enabled, the remaining replica nodes are also sent a message to compute the digest of the command response.
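To see ConsistencyLevel in action, here is a short cqlsh session; the keyspace, table, and key are invented for the example:
-- cqlsh: at QUORUM, the coordinator needs full data from the closest
-- replica plus matching digests from enough others to form a quorum
CONSISTENCY QUORUM;
SELECT * FROM my_keyspace.users WHERE user_id = 42;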