{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard-1-test-rs",
"ok" : 1 }
The command response indicates that chunks are now being drained from the shard
to be relocated to other shards. You can check the status of the draining process by
running the command again:
> db.runCommand({removeshard: "shard-1/arete:30100,arete:30101"})
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : 376,
"dbs" : 3
},
"ok" : 1 }
Once the shard is drained, you also need to make sure that no database's primary shard is the shard you're going to remove. You can check database shard membership by querying the config.databases collection:
> use config
> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "cloud-docs", "partitioned" : true, "primary" : "shardA" }
{ "_id" : "test", "partitioned" : false, "primary" : "shardB" }
What you see here is that the cloud-docs database is owned by shardA but the test database is owned by shardB. Since you're removing shardB, you need to change the test database's primary shard. You can accomplish that with the moveprimary command:
> db.runCommand({moveprimary: "test", to: "shard-0-test-rs" });
Run this command for each database whose primary is the shard to be removed. Then run the removeshard command again to verify that the shard is completely drained:
> db.runCommand({removeshard: "shard-1/arete:30100,arete:30101"})
{ "msg": "remove shard completed successfully",
"stage": "completed",
"host": "arete:30100",
"ok" : 1
}
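Putting the two steps together, you can generate one moveprimary command document per affected database before issuing them. This is a sketch; movePrimaryCommands is my own helper name, and the shard names are the ones used in the text's example:

```javascript
// Sketch: build a moveprimary command document for every database
// whose primary is the shard being drained.
function movePrimaryCommands(databases, fromShard, toShard) {
  return databases
    .filter(function (db) { return db.primary === fromShard; })
    .map(function (db) { return { moveprimary: db._id, to: toShard }; });
}

// The config.databases documents shown in the text:
const databases = [
  { _id: "admin", partitioned: false, primary: "config" },
  { _id: "cloud-docs", partitioned: true, primary: "shardA" },
  { _id: "test", partitioned: false, primary: "shardB" }
];
```

Each resulting document is what you would pass to db.runCommand() in the shell.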
Once you see that the removal is completed, it's safe to take the removed shard
offline.
UNSHARDING A COLLECTION
Although you can remove a shard, there's no official way to unshard a collection. If
you do need to unshard a collection, your best option is to dump the collection and
then restore the data to a new collection with a different name.
You can then drop

17 The utilities you use to dump and restore, mongodump and mongorestore, are covered in the next chapter.