======Overview======
Backup and restore of MongoDB can be achieved in one of the following ways:
* Physical backup of the disk (via snapshot, rsync or other tools)
* Using the Mongo utilities (mongodump, mongorestore, mongoexport, mongoimport)
Let's start with the Mongo utilities:
======Mongo Utilities======
The Mongo utilities, like those of most databases, come in pairs:
* mongodump - mongorestore (viable for a recovery strategy)
* mongoexport - mongoimport (not viable for a recovery strategy)
Therefore, a backup taken with mongodump has to be restored via mongorestore. Mongodump is able to back up:
* Entire Server
* Particular Database
* Particular Collection
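The three scopes differ only in which flags are passed. A minimal sketch of the command lines, assuming a hypothetical port 9005, database hunter_dev and collection monitors (the strings are built and echoed here for illustration; run them against a live server to take the actual dump):

```shell
# Hypothetical connection details and names for illustration.
PORT=9005
DB=hunter_dev
COLL=monitors
OUT=/home/hunter/backups

# Entire server: no --db / --collection flags.
SERVER_CMD="mongodump --port $PORT --out $OUT"
# Particular database: add --db.
DB_CMD="mongodump --port $PORT --db $DB --out $OUT"
# Particular collection: add --db and --collection.
COLL_CMD="mongodump --port $PORT --db $DB --collection $COLL --out $OUT"

echo "$COLL_CMD"
```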
=====Backup=====
Let's back up a single database as follows:
 root# mongodump \
     --port 9005 \                                    <- port
     --db hunter_dev \                                <- database name
     --out /home/hunter/backups/`date +"%m-%d-%y"`    <- output location
The command creates a directory which contains all the collections of that database:
[root@tain-mt-fhuat1 backups]# mongodump --port 9005 --db monitor --out /home/hunter/backups/`date +"%m-%d-%y"`
connected to: 127.0.0.1:9005
2018-05-14T13:19:15.527+0200 DATABASE: monitor to /home/hunter/backups/05-14-18/monitor
2018-05-14T13:19:15.545+0200 monitor.system.indexes to /home/hunter/backups/05-14-18/monitor/system.indexes.bson
2018-05-14T13:19:15.546+0200 3 documents
2018-05-14T13:19:15.546+0200 monitor.monitors to /home/hunter/backups/05-14-18/monitor/monitors.bson
2018-05-14T13:19:15.559+0200 0 documents
2018-05-14T13:19:15.560+0200 Metadata for monitor.monitors to /home/hunter/backups/05-14-18/monitor/monitors.metadata.json
2018-05-14T13:19:15.560+0200 monitor.config to /home/hunter/backups/05-14-18/monitor/config.bson
2018-05-14T13:19:15.560+0200 1 documents
2018-05-14T13:19:15.560+0200 Metadata for monitor.config to /home/hunter/backups/05-14-18/monitor/config.metadata.json
[root@tain-mt-fhuat1 backups]#
[root@tain-mt-fhuat1 backups]# ls -lart
total 4
drwx------. 4 hunter hunter 4096 Mar 20 17:45 ..
drwxr-xr-x. 3 root root 22 May 14 13:19 .
drwxr-xr-x. 4 root root 39 May 14 13:19 05-14-18
To back up only records newer than a certain date (the query below keeps documents whose "d" field is greater than the given date, expressed in epoch milliseconds):
 mongodump -d database_name -c collection_name -o dump_3mth -q "{\"d\":{\"\$gt\":{\"\$date\":`date -d 2012-12-18 +%s`000}}}" -vvvvvvv
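The `$date` value above is built by taking the epoch seconds from GNU date and appending `000` to get milliseconds. A small sketch of just that construction (using `-u` so the result is UTC and deterministic regardless of the machine's timezone; `date -d` is GNU-specific):

```shell
# Epoch milliseconds for 2012-12-18 00:00:00 UTC:
# epoch seconds from GNU date, with "000" appended.
CUTOFF_MS="$(date -u -d 2012-12-18 +%s)000"

# The query string passed to mongodump's -q flag.
QUERY="{\"d\":{\"\$gt\":{\"\$date\":$CUTOFF_MS}}}"
echo "$QUERY"
```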
You can also automate this with a script:
===Backup script===
#!/bin/bash
##########################################
# Edit these to define source and destinations
MONGO_DBS=""
BACKUP_TMP=~/tmp
BACKUP_DEST=/apphome/backup/mongo
MONGODUMP_BIN=/usr/bin/mongodump
TAR_BIN=/usr/bin/tar
##########################################
BACKUPFILE_DATE=$(date +%Y%m%d-%H%M)

# Archive the dump directory into the destination
function _do_store_archive {
    mkdir -p "$3"
    cd "$2" || exit 1
    "$TAR_BIN" -cvzf "$3/$4" dump
}

# Dump a single database, or the whole server when $1 is "all"
function _do_backup {
    UNIQ_DIR="$BACKUP_TMP/$1$(date +%s)"
    mkdir -p "$UNIQ_DIR/dump"
    echo "dumping Mongo Database $1"
    if [ "all" = "$1" ]; then
        "$MONGODUMP_BIN" -u 'adminDBA' -p 'password' --port 9005 -o "$UNIQ_DIR/dump"
    else
        "$MONGODUMP_BIN" -u 'adminDBA' -p 'password' --port 9005 -d "$1" -o "$UNIQ_DIR/dump"
    fi
    KEY="database-$BACKUPFILE_DATE.tgz"
    echo "Archiving Mongo database to $BACKUP_DEST/$1/$KEY"
    DEST_DIR="$BACKUP_DEST/$1"
    _do_store_archive "$1" "$UNIQ_DIR" "$DEST_DIR" "$KEY"
    rm -rf "$UNIQ_DIR"
}

# Check whether individual databases have been specified,
# otherwise back up the whole server as "all"
if [ "" = "$MONGO_DBS" ]; then
    MONGO_DB="all"
    _do_backup "$MONGO_DB"
else
    for MONGO_DB in $MONGO_DBS; do
        _do_backup "$MONGO_DB"
    done
fi
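A dated backup tree like the one above grows without bound. A minimal retention sketch, assuming the same `.tgz` archives and a layout like `/apphome/backup/mongo` from the script above, deletes archives older than a given number of days:

```shell
# Prune backup archives older than a given number of days.
# usage: prune_backups <backup_dir> [days]
prune_backups() {
    local dir="$1" days="${2:-30}"
    # -mtime +N matches files last modified more than N days ago.
    find "$dir" -name '*.tgz' -type f -mtime +"$days" -print -delete
}

# Example (hypothetical path, matching the script above):
# prune_backups /apphome/backup/mongo 30
```

This could be appended to the backup script or scheduled as its own cron entry.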
You can also use a more versatile script:
===Backup mongo===
#!/bin/bash
HOST=${HOST:-"$(hostname -s)"}
BACKUP_HOST=${BACKUP_HOST:-"mongo@tbp-mts-backup01"}
BACKUP_DIR=${BACKUP_DIR:-"/backup/db/mongodb"}
MONGO=$(which mongo)
MONGODUMP=$(which mongodump)
TAR=$(which tar)
SSH=$(which ssh)
MONGO_USER=${MONGO_USER:-"adminDBA"}
MONGO_PASSWORD=${MONGO_PASSWORD:-""}
MONGO_HOST=${MONGO_HOST:-"localhost"}
MONGO_PORT=${MONGO_PORT:-"27017"}
DATE="$(date +%F_%H-%M)"
STAGING_DIR=${STAGING_DIR:-"$(pwd)/dump"}
DUMP_DIR="${STAGING_DIR}/${DATE}"
function backup(){
${MONGODUMP} ${USER_OPTS} --host ${MONGO_HOST} --port ${MONGO_PORT} -o ${DUMP_DIR}
}
function pack_db() {
    local of
    of="$1_backup_${DATE}.tar.bzip2"
    ${TAR} -cj -C "${DUMP_DIR}" -f "${STAGING_DIR}/${of}" "$1"
}
function pack() {
local databases
databases=$(cd ${DUMP_DIR} && ls -1)
for db in $databases ; do
pack_db $db
done
}
function upload(){
files=$(ls -1 ${STAGING_DIR}/*.bzip2)
rsync -a $files $BACKUP_HOST:$BACKUP_DIR/$HOST/
}
USER_OPTS="-u ${MONGO_USER} -p ${MONGO_PASSWORD}"
backup
pack
upload
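Before the upload step it is worth verifying that each archive is actually readable; a corrupt archive is only discovered at restore time otherwise. A small sketch using tar's list mode (the `.bzip2` extension matches the `pack_db` function above):

```shell
# List each archive's contents; a corrupt archive makes tar exit nonzero.
# usage: verify_archives <archive> [archive ...]
verify_archives() {
    local f rc=0
    for f in "$@"; do
        if tar -tjf "$f" >/dev/null 2>&1; then
            echo "OK  $f"
        else
            echo "BAD $f"
            rc=1
        fi
    done
    return $rc
}

# Example: verify_archives "${STAGING_DIR}"/*.bzip2
```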
===crontab===
[root@server scripts]# crontab -l
0 0 * * 1-5 /bin/sh /app/mongo/hunterServer/scripts/backupMongo.sh >> /apphome/backup/mongo/backup.log
[root@server scripts]#
----
=====Restore=====
To restore such a backup, we use the mongorestore command. But before that, be sure to tar and compress the folder:
* Compress the backup folder
* Move the tar archive wherever you need
[root@servername]# tar -zcvf hunter_dev.tar.gz hunter_dev/
hunter_dev/
hunter_dev/system.indexes.bson
hunter_dev/preTransCredit.bson
hunter_dev/preTransCredit.metadata.json
hunter_dev/transCredit.bson
hunter_dev/transCredit.metadata.json
hunter_dev/billGame.bson
hunter_dev/billGame.metadata.json
hunter_dev/billHit.bson
hunter_dev/billHit.metadata.json
hunter_dev/billJpContri.bson
hunter_dev/billJpContri.metadata.jso
This command produces a tar.gz file which you can migrate more easily.
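When migrating the archive to another host, a checksum lets you confirm it arrived intact. A minimal sketch using sha256sum (the archive name is just an example):

```shell
# Record a checksum next to the archive (run on the source host).
# usage: checksum_archive <archive>
checksum_archive() {
    sha256sum "$1" > "$1.sha256"
}

# Verify the archive against its checksum (run on the target host,
# after copying both the archive and the .sha256 file).
# usage: verify_checksum <archive>
verify_checksum() {
    sha256sum -c "$1.sha256"
}

# Example:
#   checksum_archive hunter_dev.tar.gz   # before transfer
#   verify_checksum  hunter_dev.tar.gz   # after transfer
```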
The restore is done via the mongorestore utility as follows:
=== Particular DB ===
[root@servername]# mongorestore \
    --port 9005 \
    --db sales \
    --drop /app/oracle/mongobackups/03-20-18/sales/
connected to: 127.0.0.1:9005
2018-03-20T17:43:50.660+0100 /app/oracle/mongobackups/03-20-18/sales/accounts.bson
2018-03-20T17:43:50.660+0100 going into namespace [sales.accounts]
2018-03-20T17:43:50.660+0100 dropping
1 objects found
2018-03-20T17:43:50.682+0100
Creating index: { key: { _id: 1 }, name: "_id_", ns: "sales.accounts" }
[root@lpara sales]
You can specify a different authentication database as well:
 mongorestore \
    -u 'adminDBA' \
    -p 'password' \
    --authenticationDatabase admin \
    --port 9005 \
    --db hunter_dev \
    --drop /app/mongo/backups/05-22-19/hunter_dev
===Relocate DB===
You can also redirect a database restore, i.e. restore the data from database A into database B, as follows:
[root@tbp-mts-uatstdkr01 bulk_copied]# docker exec -i b2dcec2eba9d sh -c 'mongorestore --username root --password="password" --authenticationDatabase admin -d=B --gzip --drop /data/db/dump/A';
2020-04-06T12:41:46.599+0000 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2020-04-06T12:41:46.603+0000 building a list of collections to restore from /data/db/dump/incident dir
2020-04-06T12:41:46.641+0000 reading metadata for studio.audit.oldtrails from /data/db/dump/incident/audit.oldtrails.metadata.json.gz
2020-04-06T12:41:46.661+0000 reading metadata for studio.rounds from /data/db/dump/incident/rounds.metadata.json.gz
2020-04-06T12:41:46.682+0000 reading metadata for studio.audit.trails from /data/db/dump/incident/audit.trails.metadata.json.gz
2020-04-06T12:41:46.684+0000 reading metadata for studio.rounds.requests from /data/db/dump/incident/rounds.requests.metadata.json.gz
2020-04-06T12:41:46.703+0000 restoring studio.audit.oldtrails from /data/db/dump/incident/audit.oldtrails.bson.gz
2020-04-06T12:41:46.707+0000 restoring studio.rounds from /data/db/dump/incident/rounds.bson.gz
2020-04-06T12:41:46.924+0000 restoring studio.rounds.requests from /data/db/dump/incident/rounds.requests.bson.gz
2020-04-06T12:41:46.925+0000 restoring studio.audit.trails from /data/db/dump/incident/audit.trails.bson.gz
===Particular collection===
You can also restore a particular collection (table), provided you point to a folder WITHOUT nested folders:
 mongorestore \
    -u 'julien' \
    -p 'password' \
    --authenticationDatabase test \
    -c collectionName \
    --drop /home/backups/07-03-19/test
The last argument means that "/home/backups/07-03-19/test" contains ONLY that collection's files.
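If your dump directory holds many collections, you can stage just the one collection's pair of files (the `.bson` plus its `.metadata.json`) into a flat folder before running the command above. A sketch, with hypothetical paths and names:

```shell
# Copy a single collection's files from a full dump into a flat staging dir.
# usage: stage_collection <dump_db_dir> <collection> <staging_dir>
stage_collection() {
    mkdir -p "$3"
    cp "$1/$2.bson" "$1/$2.metadata.json" "$3/"
}

# Example:
#   stage_collection /home/backups/07-03-19/test collectionName /tmp/restore_one
```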
In case the backup has been taken with the "--gzip" option, it must be restored with the same option:
root@b2dcec2eba9d:/data/db/dump/incident# mongorestore --username root --password="password" -d incident --authenticationDatabase admin --gzip --drop /data/db/dump/incident
2020-01-20T10:14:54.470+0000 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2020-01-20T10:14:54.470+0000 building a list of collections to restore from /data/db/dump/incident dir
2020-01-20T10:14:54.563+0000 reading metadata for incident.files.chunks from /data/db/dump/incident/files.chunks.metadata.json.gz
2020-01-20T10:14:54.585+0000 reading metadata for incident.audit.trails from /data/db/dump/incident/audit.trails.metadata.json.gz
***************************************************************************************
2020-01-20T11:39:57.415+0000 [#######################.] incident.files.chunks 208GB/208GB (99.9%)
2020-01-20T11:40:00.411+0000 [#######################.] incident.files.chunks 208GB/208GB (100.0%)
2020-01-20T11:40:02.398+0000 [########################] incident.files.chunks 208GB/208GB (100.0%)
2020-01-20T11:40:02.398+0000 restoring indexes for collection incident.files.chunks from metadata
2020-01-20T11:40:02.449+0000 finished restoring incident.files.chunks (898117 documents)
2020-01-20T11:40:02.449+0000 done
Please bear in mind that database accessibility will be VERY LIMITED during the restore, as the entire server will be locked.