In order to add or remove a shard, two major questions have to be addressed: what happens with the sharded data when we add a node, and what happens when we remove one? In a nutshell, Mongo will operate as follows:
When you add a shard, Mongo will redistribute the data among all nodes, including the new one. Likewise, when you remove a node, Mongo will redistribute its data among the surviving nodes.
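This redistribution is handled by the balancer process inside the cluster. If you want to check whether the balancer is enabled and whether it is currently moving chunks, the mongos shell has helpers for that (a quick sketch; the output shown is what you would typically get on an idle cluster):

mongos> sh.getBalancerState()
true
mongos> sh.isBalancerRunning()
false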
So let's get started:
Addition of a node: we have already done that, but let's test it again. For simplicity, I will start a new Mongo instance on the lpardmongo server, just on a different port:
[root@lpardmongo app]# mongod --port 9006 --dbpath /app/mongo2/ --logpath /app/mongo2/mongo.log --fork
about to fork child process, waiting until server is ready for connections.
forked process: 18140
child process started successfully, parent exiting
[root@lpardmongo app]#
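Before adding the new instance to the cluster, we can quickly verify that it is up and reachable (a minimal check; the ping command simply confirms the server answers):

[root@lpardmongo app]# mongo --port 9006 --eval "printjson(db.runCommand({ ping : 1 }))"
MongoDB shell version: 2.6.12
connecting to: 127.0.0.1:9006/test
{ "ok" : 1 }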
This will create an instance which will listen on port 9006 and will have its own designated data directory and log file.
So let's add it to our cluster. Addition / Removal is ALWAYS done from the shard controller, when it is planned of course :).
[root@lparbmongo mongo2]# mongo --port 9005
MongoDB shell version: 2.6.12
connecting to: 127.0.0.1:9005/test
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 4,
	"minCompatibleVersion" : 4,
	"currentVersion" : 5,
	"clusterId" : ObjectId("5ac61332fa1e510ac2fa7fe1")
}
  shards:
	{  "_id" : "shard0000",  "host" : "lparcmongo:9005" }
	{  "_id" : "shard0001",  "host" : "lpardmongo:9005" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : false,  "primary" : "shard0001" }
	{  "_id" : "ShardDB",  "partitioned" : true,  "primary" : "shard0001" }
		ShardDB.shardCollection
			shard key: { "shardke" : 1 }
			chunks:
				shard0001	1
			{ "shardke" : { "$minKey" : 1 } } -->> { "shardke" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 0)

mongos> sh.addShard("lpardmongo:9006")
{ "shardAdded" : "shard0002", "ok" : 1 }
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 4,
	"minCompatibleVersion" : 4,
	"currentVersion" : 5,
	"clusterId" : ObjectId("5ac61332fa1e510ac2fa7fe1")
}
  shards:
	{  "_id" : "shard0000",  "host" : "lparcmongo:9005" }
	{  "_id" : "shard0001",  "host" : "lpardmongo:9005" }
	{  "_id" : "shard0002",  "host" : "lpardmongo:9006" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : false,  "primary" : "shard0001" }
	{  "_id" : "ShardDB",  "partitioned" : true,  "primary" : "shard0001" }
		ShardDB.shardCollection
			shard key: { "shardke" : 1 }
			chunks:
				shard0001	1
			{ "shardke" : { "$minKey" : 1 } } -->> { "shardke" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 0)
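One thing to note: our sharded collection still has only one chunk, and it lives on shard0001, so the balancer has nothing to move to shard0002 yet. Once the collection grows and splits into multiple chunks, you can watch how they spread across the shards (a sketch, using the collection from above):

mongos> use ShardDB
switched to db ShardDB
mongos> db.shardCollection.getShardDistribution()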
So we have added a new node to the cluster. How about removing it?
As we already said, node addition and node removal are closely related and pretty easy. On removal, Mongo will again redistribute the data from the removed node among the surviving ones. Because of that, we sometimes have to wait for the redistribution (draining) to complete:
mongos> use admin
switched to db admin
mongos> db.runCommand({removeShard:"lpardmongo:9006"})
{
	"msg" : "draining started successfully",
	"state" : "started",
	"shard" : "shard0002",
	"ok" : 1
}
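While the draining is in progress, we can also check how many chunks are still left on the departing shard straight from the config database (a small sketch; in our case the shard held no chunks yet, so the count should already be zero and draining is almost instant):

mongos> use config
switched to db config
mongos> db.chunks.count({ shard : "shard0002" })
0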
We can run the same command to see the progress:
mongos> db.runCommand({removeShard:"lpardmongo:9006"})
{
	"msg" : "removeshard completed successfully",
	"state" : "completed",
	"shard" : "shard0002",
	"ok" : 1
}
mongos>
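Finally, to confirm that the shard is really gone, we can list the remaining shards from the config database (or simply run db.printShardingStatus() again); the output should once more show only the two original shards:

mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "shard0000", "host" : "lparcmongo:9005" }
{ "_id" : "shard0001", "host" : "lpardmongo:9005" }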