8.4.4 Tagging
If you're using either write concern or read scaling, you may find yourself wanting
more granular control over exactly which members acknowledge writes or serve reads.
For example, suppose you've deployed a five-node replica set across two data centers,
NY and FR. The primary data center, NY, contains three nodes, and the secondary data
center, FR, contains the remaining two. Say you want to use write concern to
block until a certain write has been replicated to at least one node in data center FR.
With what you know about write concern so far, there's no good way to do this.
You can't use a w value of majority, since that translates to a value
of 3, and the most likely outcome is that the three nodes in NY acknowledge first.
You could use a value of 4, but that won't hold up well if, say, you lose one node from
each data center.
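To make the limitation concrete, here's a minimal shell sketch of the two naive
approaches. The events collection name and the timeout values are placeholders,
not part of the scenario above:

// Option 1: w: "majority" resolves to 3 here, and the three NY nodes will
// usually acknowledge first -- FR may still be behind when this returns.
db.events.insert(
    { type: "signup" },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)

// Option 2: w: 4 does force at least one FR acknowledgment, but the write
// times out whenever one node in each data center is unavailable.
db.events.insert(
    { type: "signup" },
    { writeConcern: { w: 4, wtimeout: 5000 } }
)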
Replica set tagging solves this problem by allowing you to define special write concern
modes that target replica set members with certain tags. To see how this works,
you first need to learn how to tag a replica set member. In the config document, each
member can have a key called tags pointing to an object containing key-value pairs.
Here's an example:
{
    "_id" : "myapp",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "ny1.myapp.com:30000",
            "tags": { "dc": "NY", "rackNY": "A" }
        },
        {
            "_id" : 1,
            "host" : "ny2.myapp.com:30000",
            "tags": { "dc": "NY", "rackNY": "A" }
        },
        {
            "_id" : 2,
            "host" : "ny3.myapp.com:30000",
            "tags": { "dc": "NY", "rackNY": "B" }
        },
        {
            "_id" : 3,
            "host" : "fr1.myapp.com:30000",
            "tags": { "dc": "FR", "rackFR": "A" }
        },
        {
            "_id" : 4,
            "host" : "fr2.myapp.com:30000",
            "tags": { "dc": "FR", "rackFR": "B" }
        }
    ],
    "settings" : {
        // Illustrative mode (the name is an assumption): a write acknowledged
        // under "multiDC" must reach members carrying two distinct values of
        // the "dc" tag, which guarantees at least one node in FR.
        "getLastErrorModes" : {
            "multiDC" : { "dc" : 2 }
        }
    }
}
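Assuming the document above is stored in a shell variable named config (a name
chosen here for illustration) and that you're connected to the primary, a sketch
of putting the custom mode to work might look like this:

// Apply the tagged configuration to the replica set
rs.reconfig(config)

// Block until the write has been acknowledged by members in two distinct
// data centers -- that is, by at least one node in FR
db.events.insert(
    { type: "signup" },
    { writeConcern: { w: "multiDC", wtimeout: 5000 } }
)

Note that the 2 in { "dc": 2 } counts distinct values of the dc tag, not a raw
number of nodes.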