MongoDB shares many operational features with an RDBMS. Like any other database, it can be shut down and started up, for example:
You can start MongoDB from a script as follows:
[root@tain-cx-mdb1 scripts]# ./startMongoDB.sh
about to fork child process, waiting until server is ready for connections.
forked process: 67867
child process started successfully, parent exiting
Where the script contains:
startMongoDB
#!/bin/bash
sudo /bin/mongod --auth -f /app/mongo/hunterServer/conf/mongo.conf --fork
and the contents of the configuration file is:
mongo.conf
port = 9005
logpath = /var/mongodb/logs/mongodb.log
logappend = true
pidfilepath = /var/mongodb/run/mongodb.pid
unixSocketPrefix = /var/mongodb/db
dbpath = /var/mongodb/db
smallfiles = true
maxConns = 16000
The database can also be shut down as follows:
~/mongodb/bin/mongo localhost:9005/admin --eval "db.shutdownServer()"
or you can do it with explicit credentials using
mongo admin -u 'adminDBA' -p 'password' --port 9005 --eval "db.shutdownServer()"
The shell prompt can be set to a more readable format as follows:

prompt = function() {
    user = db.runCommand({connectionStatus: 1}).authInfo.authenticatedUsers[0]
    host = db.getMongo().toString().split(" ")[2]
    curDB = db.getName()
    if (user) {
        uname = user.user
    } else {
        uname = "local"
    }
    return uname + "@" + host + ":" + curDB + "> "
}

[email protected]:9005:admin>
    ^           ^         ^      ^
    |           |         |      |
Username    IP Address   Port  Database
Storage and memory usage can be reviewed as follows.
Server status will give you a rough overview of how the memory is allocated:
> db.serverStatus().mem
{ "bits" : 64, "resident" : 27, "virtual" : 397, "supported" : true }
> db.serverStatus().tcmalloc
... not easy to read! ...
var mem = db.serverStatus().tcmalloc;
mem.tcmalloc.formattedString
> db.serverStatus().tcmalloc.tcmalloc.formattedString
"------------------------------------------------
MALLOC:     2763280920 ( 2635.3 MiB) Bytes in use by application
MALLOC: +    949514240 (  905.5 MiB) Bytes in page heap freelist
MALLOC: +    135775296 (  129.5 MiB) Bytes in central cache freelist
MALLOC: +      2762912 (    2.6 MiB) Bytes in transfer cache freelist
MALLOC: +     42041608 (   40.1 MiB) Bytes in thread cache freelists
MALLOC: +     16965888 (   16.2 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   3910340864 ( 3729.2 MiB) Actual memory used (physical + swap)
MALLOC: +    147517440 (  140.7 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   4057858304 ( 3869.9 MiB) Virtual address space used
MALLOC:
MALLOC:          62521 Spans in use
MALLOC:             78 Thread heaps in use
MALLOC:           4096 Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
So you can see how much memory your database is actually using.
The storage can be checked as follows:
> db.getCollectionNames().map(name => ({totalIndexSize: db.getCollection(name).stats().totalIndexSize, name: name})).sort((a, b) => a.totalIndexSize - b.totalIndexSize).forEach(printjson)
{ "totalIndexSize" : 69632, "name" : "dealers" }
{ "totalIndexSize" : 69632, "name" : "users" }
{ "totalIndexSize" : 73728, "name" : "authorization.permissions" }
{ "totalIndexSize" : 73728, "name" : "authorization.roles" }
{ "totalIndexSize" : 73728, "name" : "authorization.rules" }
{ "totalIndexSize" : 221184, "name" : "procedures" }
{ "totalIndexSize" : 372736, "name" : "reviews" }
{ "totalIndexSize" : 856064, "name" : "violations" }
{ "totalIndexSize" : 1617920, "name" : "incidents" }
{ "totalIndexSize" : 2031616, "name" : "files.files" }
{ "totalIndexSize" : 2732032, "name" : "rounds.requests" }
{ "totalIndexSize" : 23449600, "name" : "files.chunks" }
{ "totalIndexSize" : 114831360, "name" : "audit.trails" }
{ "totalIndexSize" : 368611328, "name" : "rounds" }
Of course you can investigate it further with:
> db.rounds.stats().indexSizes
{
    "_id_" : 64692224,
    "rounds_unique" : 58056704,
    "rounds_sort_dealer_name" : 18669568,
    "rounds_sort_is_closed" : 17887232,
    "rounds_sort_video_id" : 18415616,
    "rounds_sort_started_at" : 55402496,
    "rounds_text_game_code" : 113971200,
    "rounds_sort_is_cancelled" : 21516288
}
>
Get Storage per Collection
var collectionNames = db.getCollectionNames(), stats = [];
collectionNames.forEach(function (n) { stats.push(db[n].stats()); });
for (var c in stats) {
    print(stats[c]['ns'] + ": " + stats[c]['size'] + " (" + stats[c]['storageSize'] + ")");
}
Get Storage per Collection in GB
var collectionNames = db.getCollectionNames()
var col_stats = [];

// Get stats for every collection
collectionNames.forEach(function (n) { col_stats.push(db.getCollection(n).stats()); });

// Print
for (var item of col_stats) {
    print(`${item['ns']} | size: ${item['size']} (${(item['size']/1073741824).toFixed(2)} GB) | storageSize: ${item['storageSize']} (${(item['storageSize']/1073741824).toFixed(2)} GB)`);
}
Data in a Mongo database can easily become fragmented, especially with frequent additions and removals. Let's check one example:
> db.files.chunks.stats()
{
    "ns" : "incident.files.chunks",
    "size" : 148870322967,
    "count" : 588516,
    "avgObjSize" : 252958,
    "storageSize" : 394018263040,
    "capped" : false,
    "wiredTiger" : {....}
In our example, the collection "files.chunks" holds 148870322967 bytes of data but occupies 394018263040 bytes on disk. In other words, the collection takes up more than two and a half times the space its data actually needs.
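A quick way to quantify the fragmentation is to divide storageSize by size. This sketch just redoes the arithmetic on the two numbers reported by stats() above; it is plain JavaScript, runnable outside mongo as well:

```javascript
// The two figures reported by db.files.chunks.stats() above
const size = 148870322967;        // logical size of the data in bytes
const storageSize = 394018263040; // bytes the collection occupies on disk

// A ratio above 1.0 means the files are larger than the data they hold
const ratio = storageSize / size;
const reclaimable = storageSize - size; // upper bound on what compaction could free

console.log("fragmentation ratio:", ratio.toFixed(2));   // ~2.65
console.log("reclaimable at best (GB):", (reclaimable / 1073741824).toFixed(1));
```

The "reclaimable" figure is only an upper bound: compaction still needs some slack space per collection file, so you rarely get the full amount back.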
In order to defragment a collection we have two options, depending on the storage engine:
Let's see how this is done in each of them:
If your storage engine is MMAPv1, this is your way forward. The repairDatabase command is used for checking and repairing errors and inconsistencies in your data. It performs a rewrite of your data, freeing up any unused disk space along with it. Like compact, it will block all other operations on your database. Running repairDatabase can take a lot of time depending on the amount of data in your db, and it will also completely remove any corrupted data it finds.
mongod --repair --repairpath /mnt/vol1
db.repairDatabase()
or
db.runCommand({repairDatabase:1})
repairDatabase needs free space equivalent to the data in your database plus an additional 2 GB. It can be run either from the system shell or from within the mongo shell. Depending on the amount of data you have, it may be necessary to assign a separate volume for this using the --repairpath option.
The compact command works at the collection level, so each collection in your database will have to be compacted one by one. This completely rewrites the data and indexes to remove fragmentation. In addition, if your storage engine is WiredTiger, the compact command will also release unused disk space back to the system. You're out of luck if your storage engine is the older MMAPv1 though: it will still rewrite the collection, but it will not release the unused disk space. Running the compact command places a block on all other operations at the database level, so you have to plan for some downtime.
db.runCommand({compact:'collectionName'})
So let's check it in action with WiredTiger:
> db.files.chunks.storageSize()
394018263040
> db.runCommand({compact:'files.chunks'})
{ "ok" : 1 }
> db.files.chunks.storageSize()
162991509504
>
Wow, we saved more than half of the space :)
From 4.2, MMAPv1 is deprecated, so if you are thinking of upgrading to 4.2, we have to change the storage engine. The engine change isn't easy though. In a nutshell we should: export the data, reconfigure the server with the new engine, and restore the data back.
The export can be done to wherever you have space; in our case we will create a new directory for that backup:
[root@localhost mongo]# mkdir -p /app/data/mongo
[root@localhost mongo]# cd /app/data
[root@localhost data]# ls -alrt
total 0
drwxr-xr-x. 2 root root  6 Sep 10 15:20 mongo
drwxr-xr-x. 3 root root 18 Sep 10 15:20 ..
drwxr-xr-x. 3 root root 19 Sep 10 15:20 .
[root@localhost data]# mkdir backup
[root@localhost data]# cd backup/
[root@localhost data]# ls -alrt
total 0
drwxr-xr-x. 2 root root  6 Sep 10 15:20 mongo    <- Our new database folder with WiredTiger
drwxr-xr-x. 3 root root 18 Sep 10 15:20 ..
drwxr-xr-x. 4 root root 33 Sep 10 15:22 .
drwxr-xr-x. 6 root root 62 Sep 10 15:25 backup   <- The backup (export) location
The export itself is very easy:
[root@localhost backup]# mongodump --out /app/data/backup -u adminDBA -p password123 --authenticationDatabase admin
2019-09-10T15:25:28.642-0400    writing admin.system.indexes to
2019-09-10T15:25:28.649-0400    done dumping admin.system.indexes (4 documents)
2019-09-10T15:25:28.649-0400    writing config.system.indexes to
2019-09-10T15:25:28.656-0400    done dumping config.system.indexes (2 documents)
2019-09-10T15:25:28.656-0400    writing admin.system.users to
2019-09-10T15:25:28.663-0400    done dumping admin.system.users (1 document)
2019-09-10T15:25:28.663-0400    writing admin.system.version to
2019-09-10T15:25:28.671-0400    done dumping admin.system.version (2 documents)
2019-09-10T15:25:28.671-0400    writing test.mycol1 to
2019-09-10T15:25:28.671-0400    writing ExampleDB.ExampleCol to
2019-09-10T15:25:28.691-0400    done dumping ExampleDB.ExampleCol (2 documents)
2019-09-10T15:25:28.691-0400    done dumping test.mycol1 (20 documents)
We have to modify the config file so it will point to the new folder and with the correct engine:
# Where and how to store data.
storage:
  dbPath: /app/data/mongo     # <- The new folder
  journal:
    enabled: true
  engine: wiredTiger          # <- The new engine
P.S. Disable the authentication for now, since it will be a problem later:
security:
  authorization: "disabled"
Feel free to restart the mongo however you want :)
[root@localhost mongo]# mongo -u adminDBA -p password123 localhost:27017/admin --eval "db.shutdownServer()"
MongoDB shell version v4.0.12
connecting to: mongodb://localhost:27017/admin?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a3cdb4ff-b70b-4107-af33-c180dd186669") }
MongoDB server version: 4.0.12
2019-09-10T15:30:00.110-0400 I NETWORK  [js] DBClientConnection failed to receive message from localhost:27017 - HostUnreachable: Connection closed by peer
server should be down...
2019-09-10T15:30:00.113-0400 I NETWORK  [js] trying reconnect to localhost:27017 failed
2019-09-10T15:30:00.113-0400 I NETWORK  [js] reconnect localhost:27017 failed failed
2019-09-10T15:30:00.113-0400 I QUERY    [js] Failed to end session { id: UUID("a3cdb4ff-b70b-4107-af33-c180dd186669") } due to SocketException: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:27017, connection attempt failed: SocketException: Error connecting to localhost:27017 (127.0.0.1:27017) :: caused by :: Connection refused]
[root@localhost mongo]# ps -ef | grep mongo
root       6360   4437  0 15:30 pts/1    00:00:00 grep --color=auto mongo
[root@localhost mongo]# /bin/mongod -f /etc/mongod.conf --fork
2019-09-10T15:32:49.796-0400 I STORAGE  [main] Max cache overflow file size custom option: 0
about to fork child process, waiting until server is ready for connections.
forked process: 6433
child process started successfully, parent exiting
[root@localhost mongo]# mongo
MongoDB shell version v4.0.12
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("8f22cb0d-f851-4e78-9483-d58f479c4e4f") }
MongoDB server version: 4.0.12
Server has startup warnings:
2019-09-10T15:32:50.560-0400 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-09-10T15:32:50.560-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
> bye
If authentication isn't disabled, the restore will be a problem: the config file would enforce authentication, but the user (which lives in the admin database in our case) cannot be used yet against the new, empty data directory. So ensure authentication is disabled, then restore:
[root@localhost mongo]# mongorestore /app/data/backup/
2019-09-10T15:33:08.051-0400    preparing collections to restore from
2019-09-10T15:33:08.059-0400    reading metadata for test.mycol1 from /app/data/backup/test/mycol1.metadata.json
2019-09-10T15:33:08.060-0400    reading metadata for ExampleDB.ExampleCol from /app/data/backup/ExampleDB/ExampleCol.metadata.json
2019-09-10T15:33:08.087-0400    restoring test.mycol1 from /app/data/backup/test/mycol1.bson
2019-09-10T15:33:08.105-0400    no indexes to restore
2019-09-10T15:33:08.105-0400    finished restoring test.mycol1 (20 documents)
2019-09-10T15:33:08.123-0400    restoring ExampleDB.ExampleCol from /app/data/backup/ExampleDB/ExampleCol.bson
2019-09-10T15:33:08.125-0400    no indexes to restore
2019-09-10T15:33:08.125-0400    finished restoring ExampleDB.ExampleCol (2 documents)
2019-09-10T15:33:08.125-0400    restoring users from /app/data/backup/admin/system.users.bson
2019-09-10T15:33:08.185-0400    done
We can verify the data. For some reason mongo was showing me 0.000GB, even though there was data in the databases:
[root@localhost mongo]# mongo
MongoDB shell version v4.0.12
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("dd544d99-2d70-4c83-8f77-487f9eac0beb") }
MongoDB server version: 4.0.12
Server has startup warnings:
2019-09-10T15:32:50.560-0400 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-09-10T15:32:50.560-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-09-10T15:32:50.561-0400 I CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> show dbs
ExampleDB  0.000GB   ??
admin      0.000GB   ??
config     0.000GB   ??
local      0.000GB   ??
test       0.000GB   ??
> use test
switched to db test
> show collections
mycol1
> db.mycol1.find()
{ "_id" : ObjectId("5d00f49466f8aff37eced4b5"), "count" : 1, "username" : "user1", "password" : "qeg3amrt3xr", "createdOn" : ISODate("2019-06-12T12:48:20.255Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4b6"), "count" : 2, "username" : "user2", "password" : "jxucw3wstt9", "createdOn" : ISODate("2019-06-12T12:48:20.257Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4b7"), "count" : 3, "username" : "user3", "password" : "7r5u24ndn29", "createdOn" : ISODate("2019-06-12T12:48:20.257Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4b8"), "count" : 4, "username" : "user4", "password" : "8o1s4qjjor", "createdOn" : ISODate("2019-06-12T12:48:20.258Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4b9"), "count" : 5, "username" : "user5", "password" : "fyzxovtpgb9", "createdOn" : ISODate("2019-06-12T12:48:20.259Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4ba"), "count" : 6, "username" : "user6", "password" : "k5agz7u8fr", "createdOn" : ISODate("2019-06-12T12:48:20.259Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4bb"), "count" : 7, "username" : "user7", "password" : "5pzn3njnhfr", "createdOn" : ISODate("2019-06-12T12:48:20.260Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4bc"), "count" : 8, "username" : "user8", "password" : "1gxa5szia4i", "createdOn" : ISODate("2019-06-12T12:48:20.261Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4bd"), "count" : 9, "username" : "user9", "password" : "7d6ceyojemi", "createdOn" : ISODate("2019-06-12T12:48:20.263Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f49466f8aff37eced4be"), "count" : 10, "username" : "user10", "password" : "29ix622zkt9", "createdOn" : ISODate("2019-06-12T12:48:20.263Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d675395"), "count" : 1, "username" : "user1", "password" : "8uhzykqpvi", "createdOn" : ISODate("2019-06-12T12:54:52.374Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d675396"), "count" : 2, "username" : "user2", "password" : "zm7v1zcl3di", "createdOn" : ISODate("2019-06-12T12:54:52.374Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d675397"), "count" : 3, "username" : "user3", "password" : "lk3hn597ldi", "createdOn" : ISODate("2019-06-12T12:54:52.375Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d675398"), "count" : 4, "username" : "user4", "password" : "6akj6mvx6r", "createdOn" : ISODate("2019-06-12T12:54:52.375Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d675399"), "count" : 5, "username" : "user5", "password" : "qvj2nyx2yb9", "createdOn" : ISODate("2019-06-12T12:54:52.375Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d67539a"), "count" : 6, "username" : "user6", "password" : "5o9w6zuxr", "createdOn" : ISODate("2019-06-12T12:54:52.376Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d67539b"), "count" : 7, "username" : "user7", "password" : "z2if1rlik9", "createdOn" : ISODate("2019-06-12T12:54:52.377Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d67539c"), "count" : 8, "username" : "user8", "password" : "uyrhvqxs9k9", "createdOn" : ISODate("2019-06-12T12:54:52.377Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d67539d"), "count" : 9, "username" : "user9", "password" : "u2ff0kvs4i", "createdOn" : ISODate("2019-06-12T12:54:52.378Z"), "score" : 0 }
{ "_id" : ObjectId("5d00f61c4422539b4d67539e"), "count" : 10, "username" : "user10", "password" : "a7ny85xw29", "createdOn" : ISODate("2019-06-12T12:54:52.379Z"), "score" : 0 }
Like any database, mongo has objects to store information in; we already discussed them:
As you know, a collection can store many documents, which don't even need to have the same structure. Imagine a collection as a laundry bin in which you put a lot of shirts; these shirts don't even have to be the same size or colour. Since Mongo can store pretty big objects, however, it is good to have indexes on such collections. So let's check how to create indexes:
Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must perform a collection scan, i.e. scan every document in a collection, to select those documents that match the query statement. If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect.
Indexes are special data structures [1] that store a small portion of the collection’s data set in an easy to traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field. The ordering of the index entries supports efficient equality matches and range-based query operations. In addition, MongoDB can return sorted results by using the ordering in the index.
The following diagram illustrates a query that selects and orders the matching documents using an index:
In addition to the MongoDB-defined _id index, MongoDB supports the creation of user-defined ascending/descending indexes on a single field of a document.
MongoDB also supports user-defined indexes on multiple fields, i.e. compound indexes.
The order of fields listed in a compound index has significance. For instance, if a compound index consists of { userid: 1, score: -1}, the index sorts first by userid and then, within each userid value, sorts by score. For compound indexes and sort operations, the sort order (i.e. ascending or descending) of the index keys can determine whether the index can support a sort operation. See Sort Order for more information on the impact of index order on results in compound indexes.
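The effect of field order can be mimicked in plain JavaScript with a comparator. This is only a sketch of the key ordering a { userid: 1, score: -1 } index maintains, not how MongoDB stores the index internally:

```javascript
// Mimic the key ordering of a { userid: 1, score: -1 } compound index:
// sort first by userid ascending, then by score descending within each userid.
function compoundCompare(a, b) {
  if (a.userid !== b.userid) return a.userid < b.userid ? -1 : 1; // userid: 1 (ascending)
  return b.score - a.score;                                       // score: -1 (descending)
}

const docs = [
  { userid: "bob",   score: 70 },
  { userid: "alice", score: 50 },
  { userid: "bob",   score: 90 },
  { userid: "alice", score: 80 },
];

docs.sort(compoundCompare);
console.log(docs.map(d => d.userid + "/" + d.score).join(", "));
// alice/80, alice/50, bob/90, bob/70
```

Because the keys are kept in exactly this order, the index can answer a query on userid alone, or on userid plus a score sort, but not a query on score alone.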
MongoDB uses multikey indexes to index the content stored in arrays. If you index a field that holds an array value, MongoDB creates separate index entries for every element of the array. These multikey indexes allow queries to select documents that contain arrays by matching on element or elements of the arrays. MongoDB automatically determines whether to create a multikey index if the indexed field contains an array value; you do not need to explicitly specify the multikey type.
To support efficient queries of geospatial coordinate data, MongoDB provides two special indexes: 2d indexes that use planar geometry when returning results and 2dsphere indexes that use spherical geometry to return results.
MongoDB provides a text index type that supports searching for string content in a collection. These text indexes do not store language-specific stop words (e.g. “the”, “a”, “or”) and stem the words in a collection to only store root words.
To support hash based sharding, MongoDB provides a hashed index type, which indexes the hash of the value of a field. These indexes have a more random distribution of values along their range, but only support equality matches and cannot support range-based queries.
The unique property for an index causes MongoDB to reject duplicate values for the indexed field. Other than the unique constraint, unique indexes are functionally interchangeable with other MongoDB indexes.
New in version 3.2.
Partial indexes only index the documents in a collection that meet a specified filter expression. By indexing a subset of the documents in a collection, partial indexes have lower storage requirements and reduced performance costs for index creation and maintenance.
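As an illustration, a partial index is created by passing the partialFilterExpression option to createIndex. The collection and field names here are hypothetical, chosen only to show the shape of the command:

```javascript
// Index only the open incidents; documents with any other status
// are never entered into the index at all.
// ("incidents", "reported_at" and "status" are hypothetical names.)
db.incidents.createIndex(
    { "reported_at" : 1 },
    { partialFilterExpression: { "status" : "open" } }
)
```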
The sparse property of an index ensures that the index only contain entries for documents that have the indexed field. The index skips documents that do not have the indexed field.
You can combine the sparse index option with the unique index option to prevent inserting documents that have duplicate values for the indexed field(s) and skip indexing documents that lack the indexed field(s).
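Combining the two options looks like this in the mongo shell; the collection and field names are hypothetical:

```javascript
// Every document that HAS an "email" field must have a unique value,
// but documents without "email" are skipped by the index entirely.
// ("users" / "email" are illustrative names.)
db.users.createIndex(
    { "email" : 1 },
    { unique : true, sparse : true }
)
```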
TTL indexes are special indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time. This is ideal for certain types of information like machine generated event data, logs, and session information that only need to persist in a database for a finite amount of time.
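For example, a TTL index on a date field (the field name here is illustrative) could expire documents 30 days after their timestamp, since 30 * 24 * 3600 = 2592000 seconds:

```javascript
// Documents are removed roughly 30 days after their "createdOn" value.
// (The "createdOn" field name is an assumption for this example.)
db.audit.trails.createIndex(
    { "createdOn" : 1 },
    { expireAfterSeconds : 2592000 }
)
```

Note that expiry runs in a background task, so documents are deleted some time after they qualify, not at the exact second.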
New in version 4.4.
Hidden indexes are not visible to the query planner and cannot be used to support a query.
By hiding an index from the planner, users can evaluate the potential impact of dropping an index without actually dropping the index. If the impact is negative, the user can unhide the index instead of having to recreate a dropped index. And because indexes are fully maintained while hidden, the indexes are immediately available for use once unhidden.
Except for the _id index, you can hide any indexes.
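Hiding and unhiding is done with the hideIndex()/unhideIndex() shell helpers (available from MongoDB 4.4). As a sketch, using one of the index names from the rounds collection shown earlier:

```javascript
// Make the planner ignore the index while it is still fully maintained
db.rounds.hideIndex("rounds_sort_video_id")

// If query performance degrades, make it visible again instantly
db.rounds.unhideIndex("rounds_sort_video_id")
```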
Let's see how to list the indexes a collection has and how to create or re-create an index:
To get the indexes for a certain collection we can use the following function in any collection:
Get Indexes
> db.files.files.getIndexes()
[
    { "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_" },
    { "v" : 2, "unique" : true, "key" : { "filename" : 1, "uploadDate" : 1 }, "name" : "files_files_unique" }
]
>
To create an index we can use:
Create Index
db.files.chunks.createIndex( { "files_id" : 1, "n" : 1 }, { "unique" : true } )
reIndex will re-create all indexes on a collection (after mongoimport, for example):
Re-Index
> db.files.files.reIndex()
{
    "nIndexesWas" : 2,
    "nIndexes" : 2,
    "indexes" : [
        { "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_" },
        { "v" : 2, "unique" : true, "key" : { "filename" : 1, "uploadDate" : 1 }, "name" : "files_files_unique" }
    ],
    "ok" : 1
}
>
If you have a JavaScript file (.js) which you would like to execute against mongo, you can do it as follows:
[root@tbp-mts-mdb02 ~]# /usr/bin/mongo -u root -p "password" --authenticationDatabase=admin --host localhost < script.js
MongoDB shell version v4.2.2
connecting to: mongodb://localhost:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("dc5d5def-a7f9-4237-9988-934944d4dedf") }
MongoDB server version: 4.2.2
switched to db incident
{ "nIndexesWas" : 2, "ok" : 1 }
{ "nIndexesWas" : 8, "ok" : 1 }
{ "nIndexesWas" : 7, "ok" : 1 }