Error after upgrading Countly to 18.04.1 CE


We are in a bind and hope the Countly community can help us resolve the issue.
We recently upgraded to Countly 18.04.1 Community Edition, and any new application that attempts to log data in Countly now receives the following error:

MongoError: too many namespaces/collections {"name":"MongoError","message":"too many namespaces/collections","ok":0,"errmsg":"too many namespaces/collections","code":10081}

Further down in the server log:
Could not save crash { MongoError: too many namespaces/collections
at Function.MongoError.create (/home/ubuntu/countly/node_modules/mongodb-core/lib/error.js:31:11)
at toError (/home/ubuntu/countly/node_modules/mongodb/lib/utils.js:139:22)
at /home/ubuntu/countly/node_modules/mongodb/lib/collection.js:669:23
at handleCallback (/home/ubuntu/countly/node_modules/mongodb/lib/utils.js:120:56)
at resultHandler (/home/ubuntu/countly/node_modules/mongodb/lib/bulk/ordered.js:421:14)
at /home/ubuntu/countly/node_modules/mongodb-core/lib/connection/pool.js:469:18
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
name: 'MongoError',
message: 'too many namespaces/collections',
driver: true,
code: 10081,
index: 0,
errmsg: 'too many namespaces/collections',
getOperation: [Function],
toJSON: [Function],
toString: [Function] }

Could someone please shed some light on what might be happening or offer guidance on a fix?



  • Hello,
    So basically you have too many collections in the database (basically, too many tables).

    And you have 2 options:

    1. Delete collections you don't need. The most common culprit here is push: if you use it a lot, it creates lots of collections. You can upgrade to the new push in 18.08.1, which no longer creates those collections, and then delete the old ones by running nodejs countly/bin/upgrade/18.01.1/scripts/push_clear.js
      It may also be some other collections; you can check by running
      mongo countly
      show collections
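    For context, MMAPv1's default 16 MB namespace file supports roughly 24,000 namespaces (collections plus their indexes) per database. A quick non-interactive way to see how many collections you have (assuming the default countly database name) is:

```shell
# Print the number of collections in the countly database
# (requires a running mongod; "countly" is the default database name).
mongo countly --quiet --eval "db.getCollectionNames().length"
```

    If the number is anywhere near the namespace limit, that confirms the cause of the error.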

    2. If you are using the MMAPv1 storage engine in MongoDB, you can migrate to WiredTiger, which does not have this limit. That is mostly done by exporting the db, changing the MongoDB config file, restarting mongod, and importing the db.
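    That migration can be sketched roughly as follows. Note this is only a sketch: the paths, service name, and user are examples, so adjust them to your setup, and take a backup before starting.

```shell
# 1. Dump the countly database while mongod is still running on MMAPv1
mongodump --db countly --out /backup/countly-dump

# 2. Stop mongod and point it at a new, EMPTY data directory
sudo systemctl stop mongod
sudo mkdir -p /var/lib/mongodb-wt
sudo chown mongodb:mongodb /var/lib/mongodb-wt
# In /etc/mongod.conf set storage.dbPath to /var/lib/mongodb-wt
# and storage.engine to wiredTiger, then:
sudo systemctl start mongod

# 3. Restore the dump into the fresh WiredTiger instance
mongorestore --db countly /backup/countly-dump/countly
```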

    Let us know if you need further assistance.

  • Thank you for the assistance.
    If we upgrade to 18.08.1, do we also need to upgrade MongoDB to 3.4 or 3.6?

    Additionally, I believe we are running MMAPv1, as that is what is in our mongod.conf file. Is migrating a possibility in that case? And is there a way to check which storage engine we are actually running?


  • If this is indeed due to the push collections, you can simply run
    nodejs countly/bin/upgrade/18.01.1/scripts/push_clear.js

    That would clear them, but they would eventually pile up again. That is why upgrading to 18.08.1 is recommended.

    If you upgrade to 18.08.1, it should work with MongoDB 3.2, 3.4, and 3.6, but we encourage upgrading to the latest 3.6 version, because in the future we might rely on 3.6-specific features for optimization purposes.

    Checking the storage engine can be done like this:
    mongo --eval "db.serverStatus().storageEngine"

    But it does not matter which MMAPv1 version you have: only WiredTiger supports an unlimited number of collections, though it also requires a bit more resources.

  • Hello,

    My company is concerned that push_clear.js would remove pertinent data from the database.
    Is that the case? They are worried we would lose customer data, and I need to be able to assure them that running push_clear.js will not do that.

    Do you know the resource requirements for WiredTiger? We are currently running Countly with 2 vCPUs / 8 GB RAM.

    Again thank you for your prompt assistance.


  • push_clear.js shouldn't do that; however, we cannot guarantee anything here, as you are on Community Edition rather than Enterprise Edition with an SLA. Please run anything written in this community help forum at your own risk.

    If you switch to WiredTiger, there will be a slight increase in requirements. We do not know your current server/CPU load or MongoDB usage, so we cannot suggest specific CPU/RAM figures for if and when you switch to WiredTiger.

  • Hello,

    This was done on a replica instance, so our production instance remains unchanged and I can re-run the update on another replica.

    I attempted the upgrade of Countly and ran the push_clear.js script.

    But new errors have appeared in our Countly server logs:

    2018-10-10T15:42:48.577Z: ERROR [jobs:scanner] Error when loading job /home/ubuntu/countly/api/parts/jobs/../../../plugins/push/api/jobs/process.js: {} Error: The module '/home/ubuntu/countly/plugins/push/api/parts/apn/build/Release/apns.node' was compiled against a different Node.js version using NODE_MODULE_VERSION 48. This version of Node.js requires NODE_MODULE_VERSION 57. Please try re-compiling or re-installing the module (for instance, using npm rebuild or npm install).

    This error seems to relate to the Node.js upgrade performed during the 18.08.1 upgrade.
    I am somewhat unfamiliar with Countly. Are these modules provided by Countly, or by a third party?

    And this error keeps repeating at the bottom of the log.

    2018-10-10T17:30:00.489Z: INFO [jobs:manager] Trying to start job {"_id":"599461e6a398ec38c6ac0984","name":"assistant:generate","created":1539183952141,"status":0,"started":1539190800155,"finished":1539190840068,"duration":39915,"schedule":"every 30 minutes starting on the 0 min","next":1539192600000,"modified":1539190840070,"size":null,"done":null,"bookmark":null,"error":null}

    Can you help explain these errors and how to rectify them?


  • So push_clear.js deletes the push collections for created messages, which record which users each message should be sent to. It would not delete any data other than that.

    And you need to run it on the DB server; running it on the primary alone is enough, as the secondaries will sync from the primary.

    About the Node.js errors: yes, some modules are compiled against a specific Node.js version, and they are recompiled at install/upgrade time.

    So the upgrade script was supposed to:

    1. install the new Node.js version
    2. delete the old dependencies folder
    3. install the new dependencies

    This all is done here:

    For some reason this was not done on your server.
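    If the script skipped those steps, repeating the dependency part by hand usually fixes a NODE_MODULE_VERSION mismatch. A rough sketch (the install path is taken from the log output above; this assumes the new Node.js is already installed):

```shell
# Reinstall Countly's dependencies so native modules (e.g. apns.node)
# are recompiled against the Node.js version now on the system.
cd /home/ubuntu/countly
rm -rf node_modules   # drop modules built against the old Node.js
npm install           # reinstall and recompile native modules
```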

    And the latter message is not an error at all. It is an INFO-level log statement, so you probably have info-level logging enabled for jobs or for the whole server.

  • After running the upgrade again, I checked the dashboard log and saw this error again:

    too many namespaces/collections

    Does this mean we have to try option 2 and switch the storage engine to WiredTiger?

    2018-10-10T18:25:23.563Z: ERROR [db:write] Error writing auth_tokens {"name":"insert","args":[{"_id":"12d4cfc7432f0e7814b048025061d751c2a9ae24","ttl":1800000,"ends":1540995923,"multi":true,"owner":"582c7fefa0c3103e00c32c1c","app":"","endpoint":"","purpose":"LoggedInAuth"},null]} WriteError({"code":10081,"index":0,"errmsg":"too many namespaces/collections","op":{"_id":"12d4cfc7432f0e7814b048025061d751c2a9ae24","ttl":1800000,"ends":1540995923,"multi":true,"owner":"582c7fefa0c3103e00c32c1c","app":"","endpoint":"","purpose":"LoggedInAuth"}}) {"code":10081,"index":0,"errmsg":"too many namespaces/collections","op":{"_id":"12d4cfc7432f0e7814b048025061d751c2a9ae24","ttl":1800000,"ends":1540995923,"multi":true,"owner":"582c7fefa0c3103e00c32c1c","app":"","endpoint":"","purpose":"LoggedInAuth"}}
    { BulkWriteError: too many namespaces/collections
    at resultHandler (/home/ubuntu/countly/node_modules/mongodb/lib/bulk/ordered.js:464:11)
    at /home/ubuntu/countly/node_modules/mongodb-core/lib/connection/pool.js:531:18
    at _combinedTickCallback (internal/process/next_tick.js:132:7)
    at process._tickCallback (internal/process/next_tick.js:181:9)
    name: 'BulkWriteError',
    message: 'too many namespaces/collections',
    driver: true,
    code: 10081,
    index: 0,
    errmsg: 'too many namespaces/collections',
    getOperation: [Function],
    toJSON: [Function],
    toString: [Function],
    BulkWriteResult {
    ok: [Getter],
    nInserted: [Getter],
    nUpserted: [Getter],
    nMatched: [Getter],
    nModified: [Getter],
    nRemoved: [Getter],
    getInsertedIds: [Function],
    getUpsertedIds: [Function],
    getUpsertedIdAt: [Function],
    getRawResponse: [Function],
    hasWriteErrors: [Function],
    getWriteErrorCount: [Function],
    getWriteErrorAt: [Function],
    getWriteErrors: [Function],
    getLastOp: [Function],
    getWriteConcernError: [Function],
    toJSON: [Function],
    toString: [Function],
    isOk: [Function] },
    [Symbol(mongoErrorContextSymbol)]: {} }

  • You would need to clear the push collections first and then upgrade, if it is indeed the push collections that are using up the limit.

  • Well, I tried the WiredTiger engine replacement.

    I got very far but then on the mongo restore I got this error:
    Failed: countly.plugins: error creating collection countly.plugins: error running create command: collection already exists

    I am at a loss as to what to do next.

  • Sorry, but I need more information.
    Is this during the mongorestore phase?
    If yes, it can only happen if you already tried the restore once before. Did you?
    If not, then it seems you still have the old data there, which means you did not follow the guide: you must start the db with the new engine on a new, empty folder. That means either deleting the old data after exporting it, or creating a new folder/path in the MongoDB config.
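    Concretely, the "new, empty folder" part means the storage section of mongod.conf should point at a fresh directory before the restore, something like this (the path is only an example):

```yaml
storage:
  dbPath: /var/lib/mongodb-wt   # new, empty directory for the WiredTiger files
  engine: wiredTiger
```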

  • I think I used the wrong arguments for mongodump; I dumped all the databases instead of following the instructions here:

    I was able to take a snapshot of our production Countly instance again and went through the steps: I detached the old "mongodb" drive, attached a new XFS "mongodb" drive, and successfully dumped and imported the database into WiredTiger and started Countly.

    For now, I think we are good to go. I appreciate all the assistance.

