servers, then all chunk splits and migrations will be suspended.15 Fortunately, suspending sharding operations rarely affects the working of a shard cluster; splitting and migrating can wait until the lost machine is recovered.
That's the minimum recommended setup for a two-shard cluster. But applications demanding the highest availability and the fastest paths to recovery will need something more robust. As discussed in the previous chapter, a replica set consisting of two replicas and one arbiter is vulnerable while recovering. Having three nodes reduces the fragility of the set during recovery and also allows you to keep a node in a secondary data center for disaster recovery. Figure 9.5 shows a robust two-shard cluster topology. Each shard consists of a three-node replica set, where each node contains a complete copy of the data. For disaster recovery, one config server and one node from each shard are located in a secondary data center; to ensure that those nodes never become primary, they're given a priority of 0.
With this configuration, each shard is replicated twice, not just once. Additionally, the secondary data center has all the data necessary for a user to completely reconstruct the shard cluster in the event of the failure of the first data center.
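To see how the priority-0 nodes fit in, here's a minimal sketch of initiating one shard's replica set from the mongo shell. The replica set name and host names are hypothetical stand-ins for the machines in figure 9.5; the important detail is the priority: 0 option on the member that lives in the recovery data center, which prevents it from ever being elected primary.

  // Run against one of the shard-a nodes in the main data center.
  // machine5 is the disaster-recovery node; priority: 0 keeps it
  // eligible for replication but never for election as primary.
  rs.initiate({
    _id: "shard-a",
    members: [
      { _id: 0, host: "machine1.main.example.com:27017" },
      { _id: 1, host: "machine2.main.example.com:27017" },
      { _id: 2, host: "machine5.recovery.example.com:27017", priority: 0 }
    ]
  })

The same pattern applies to shard B, with its priority-0 member on the other recovery-site machine.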
Figure 9.5 A two-shard cluster deployed across six machines and two data centers. The main data center (machines 1-4) runs two mongod members of shard A and two of shard B (port 27017) plus two config servers (port 27019); the recovery data center (machines 5 and 6) holds a priority-0 member of each shard and the third config server.
15 All three config servers need to be online for any sharding operations to take place.