I am looking to save data in MongoDB. For this purpose, I have a setup with one mongos, three config servers, and a sharded cluster consisting of two replica sets (each with a primary, a secondary, and an arbiter).
There are two main issues I am facing:
1) What is the most effective method to track the time taken to store all the data?
Currently, I am running the command "time mongo 141.100.55.72:25001/mydb --quiet /data/javascript/insert.js" against the mongos (the insert.js script is shown below). I want to time multiple operations and save each result to a file. I am working on an Ubuntu server.
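One way to time several runs and append each result to a file is a small shell harness. This is a sketch: the log file name `results.log` is an assumption, and the `sleep 0` placeholder stands in for the real mongo invocation so the structure is clear.

```shell
#!/bin/sh
# Timing harness sketch. Replace 'sleep 0' with the real command:
#   mongo 141.100.55.72:25001/mydb --quiet /data/javascript/insert.js
LOG=results.log
: > "$LOG"                      # truncate previous results
for run in 1 2 3; do
  start=$(date +%s%N)           # nanoseconds since epoch (GNU date, available on Ubuntu)
  sleep 0                       # <-- the operation being measured
  end=$(date +%s%N)
  echo "run $run: $(( (end - start) / 1000000 )) ms" >> "$LOG"
done
cat "$LOG"
```

Measuring inside the script keeps shell startup out of the numbers; alternatively, `{ time mongo ...; } 2>> results.log` captures bash's own `time` output, since `time` writes to stderr.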
2) The operation itself is very slow - storing 100,000 documents takes about 5 minutes. How can I improve the performance?
The JavaScript script is as follows (it creates test data with a shard key for testing purposes, which may be contributing to the slow performance):
var amount = 100000 / 4;
var x = 1;
var doc = "";
for (var i = 0; i < amount; i++)
{
    doc = { datetime: '1119528044', att2: '...', key: x, ... }; //14 attributes
    db.mycol.insert(doc);
    x = x + 1;
}
for (var i = 0; i < amount; i++)
{
    doc = { datetime: '1219268044', att2: '...', key: x, ... }; //14 attributes
    db.mycol.insert(doc);
    x = x + 1;
}
for (var i = 0; i < amount; i++)
{
    doc = { datetime: '1355851700', att2: '...', key: x, ... }; //14 attributes
    db.mycol.insert(doc);
    x = x + 1;
}
for (var i = 0; i < amount; i++)
{
    doc = { datetime: '1444851704', att2: '...', key: x, ... }; //14 attributes
    db.mycol.insert(doc);
    x = x + 1;
}
Additionally, I need guidance on how to verify that the data has actually been distributed across both replica sets. What would be the most effective approach to perform this check?
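One approach (a sketch; the shell commands live in the comments because they need a live cluster) is to compare the per-shard counts against the total seen through the mongos:

```javascript
// In the mongo shell, connected to the mongos:
//   db.mycol.count()                    // total through the router
//   db.mycol.getShardDistribution()     // doc counts and data size per shard
//   sh.status()                         // chunk ranges per replica set
//
// To check replication inside one replica set, connect to a secondary,
// run rs.slaveOk(), and compare db.mycol.count() with the primary's count.

// Pure-JS sanity check: the per-shard counts must sum to the mongos total.
function countsConsistent(shardCounts, totalViaMongos) {
  var sum = shardCounts.reduce(function (a, b) { return a + b; }, 0);
  return sum === totalViaMongos;
}
```

If `getShardDistribution()` reports all documents on one shard, the shard key is not splitting the data, which would also explain poor insert throughput.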
Thank you,