"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "arete:30000"
},
{
"_id" : 1,
"host" : "arete:30001"
},
{
"_id" : 2,
"host" : "arete:30002"
}
]
}
> config.members[1].host = "foobar:40000"
foobar:40000
> rs.reconfig(config)
Now the replica set will identify the new node, and the new node should start to sync
from an existing member.
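To confirm that the set has picked up the change, you can ask for each member's state from the shell. Here's a quick sketch, reusing the hosts from the reconfiguration above; the exact states you'll see depend on how far the new node has gotten through its initial sync:
> rs.status().members.map(function(m) { return m.name + " " + m.stateStr })
[ "arete:30000 PRIMARY", "foobar:40000 STARTUP2", "arete:30002 SECONDARY" ]
Once the initial sync finishes, the new member's state should change to SECONDARY.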
In addition to restoring via a complete resync, you also have the option of restoring from a recent backup. You'll typically perform backups from one of the secondary nodes by making snapshots of the data files and then storing them offline (backups are discussed in detail in chapter 10). Recovery via backup is possible only if the oplog within the backup isn't stale relative to the oplogs of the current replica set members. This means that the latest operation in the backup's oplog must still exist in the live oplogs. You can use the information provided by db.getReplicationInfo() to see right away if this is the case. When you do, don't forget to take into account the time it'll take to restore the backup. If the backup's latest oplog entry is likely to go stale in the time it takes to copy the backup to a new machine, then you're better off performing a complete resync.
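For instance, here's roughly what that check might look like when run against one of the live members. This is a sketch only; the field values below are illustrative:
> db.getReplicationInfo()
{
    "logSizeMB" : 1024,
    "usedMB" : 507.4,
    "timeDiff" : 86164,
    "timeDiffHours" : 23.93,
    "tFirst" : "Wed Aug 01 2012 10:33:23 GMT-0400 (EDT)",
    "tLast" : "Thu Aug 02 2012 10:29:27 GMT-0400 (EDT)",
    "now" : "Thu Aug 02 2012 10:29:45 GMT-0400 (EDT)"
}
Here tFirst marks the oldest operation still present in this member's oplog. As long as the latest operation in the backup's oplog is newer than tFirst, with room to spare for the time the restore itself will take, the backup is still usable.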
But restoring from backup can be faster, in part because the indexes don't have to
be rebuilt from scratch. To restore from a backup, copy the backed-up data files to a
mongod data path. The resync should begin automatically, and you can check the logs
or run rs.status() to verify this.
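If you want a closer look at the restored member's progress, the legacy shell's rs.printSlaveReplicationInfo() helper reports how far each secondary's last applied operation trails the primary. A sketch follows, with a placeholder hostname and illustrative output:
> rs.printSlaveReplicationInfo()
source: arete:30002
    syncedTo: Thu Aug 02 2012 10:29:27 GMT-0400 (EDT)
    = 18 secs ago (0.01hrs)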
DEPLOYMENT STRATEGIES
You now know that a replica set can consist of up to 12 nodes, and you've been presented with a dizzying array of configuration options and considerations regarding failover and recovery. There are a lot of ways you might configure a replica set, but in this section I'll present a couple that will work for the majority of cases.
The most minimal replica set configuration providing automated failover is the
one you built earlier consisting of two replicas and one arbiter. In production, the
arbiter can run on an application server while each replica gets its own machine. This
configuration is economical and sufficient for many production apps.
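As a sketch of what initiating that minimal deployment might look like, here's a sample configuration; the set name and host names are placeholders, and the arbiterOnly flag is what distinguishes the arbiter from the data-bearing members:
> rs.initiate({
    "_id" : "myapp",
    "members" : [
        { "_id" : 0, "host" : "db1.example.com:27017" },
        { "_id" : 1, "host" : "db2.example.com:27017" },
        { "_id" : 2, "host" : "app1.example.com:27017", "arbiterOnly" : true }
    ]
})
The two data-bearing members get their own machines, while the arbiter, which stores no data and only votes in elections, can run on the application server (app1 here).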