High availability deployment (Enterprise Edition)


While gathering analytics data is not as critical as gathering customer information and keeping it safe, we must make sure that all data is replicated and can be recovered in case a failure occurs. The major advantages of replica sets are business continuity through high availability, data safety through data redundancy, and read scalability through load sharing of reads.

With replica sets, MongoDB language drivers know which instance is the current primary. All write operations go to the primary, which then replicates the data to the secondaries. If the primary goes down, the remaining members elect a new primary and the drivers automatically connect to it; this automatic failover is what provides high availability. The primary is therefore not fixed: it is nominated by the members of the replica set.

Typically you need 3 MongoDB instances in a replica set, each on a different server machine. You can add more replicas of the primary for read scalability, but you only need 3 for high availability failover. With three instances, if one goes down, the load on each remaining instance goes up by only 50%, which is the preferred situation.
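The 50% figure can be checked with quick arithmetic: with N data-bearing members sharing reads equally, each serves 100/N percent of the load, and losing one raises that to 100/(N-1) percent. A throwaway sketch:

```shell
# With N members sharing reads equally, each carries 100/N percent
# of the read load; after one failure the survivors carry 100/(N-1).
N=3
before=$((100 / N))        # 33 (percent, integer math)
after=$((100 / (N - 1)))   # 50 (percent)
echo "per-node read load: ${before}% -> ${after}%"
```

Going from roughly 33% to 50% per node is a 50% relative increase; with only two members, losing one would instead double the survivor's load.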

Network diagram

Below you can see the network diagram for this setup. There is a load balancer, typically a hardware load balancer deployed internally inside the enterprise, so this document won’t go into details about it. This load balancer simply distributes traffic between two Node.js servers.

Behind the load balancer, there are two Countly instances, which act as application servers. They accept connections from the Countly SDKs and then store data in the MongoDB replica set.

Server specifications

Server specifications stated below are the minimum recommended specifications for Enterprise Edition and might need to be adjusted based on your traffic.

Node 1 and Node 2 servers are Countly servers with the following configuration:

  • 2 CPU cores
  • 4 GB RAM
  • 20 GB root (boot) disk
  • Ports 80 and/or 443 (HTTP and HTTPS), plus port 25 for mail, need to be open to the load balancer (in/out)

MongoDB replica set servers (primary and secondary) have the following configuration:

  • 1+ CPU core(s)
  • 8 GB RAM
  • 20 GB root (boot) disk
  • 100 GB SSD disk attached (mounted) to /data (the /data directory should be owned by the mongodb user: chown -R mongodb:mongodb /data)
  • Port 27017 needs to be open to all instances (in/out)

A MongoDB arbiter server is a simple server with a configuration as follows:

  • 1 CPU core
  • 2 GB RAM
  • 20 GB root (boot) disk
  • Port 27017 needs to be open to all instances (in/out)

MongoDB replica set installation

As a prerequisite, you need to install MongoDB 3.2 on all three MongoDB instances (primary, secondary and arbiter). For installation on RedHat follow this guide, and for Ubuntu follow this guide. You’ll need an operating system user with sudo privileges to perform the installation and the configurations stated below, and to restart the mongod service later on.

1. Modify MongoDB storage engine

Countly recommends using the MMAPv1 storage engine instead of the default WiredTiger storage engine that comes with the MongoDB 3 series. In order to change the storage engine, edit the /etc/mongod.conf file on all three MongoDB instances and change the engine directive to look like below:

    storage:
      engine: mmapv1
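For reference, the relevant storage section of /etc/mongod.conf would end up looking like the sketch below; it is written to a temporary file here for illustration only, and the dbPath assumes the /data mount described in the server specifications:

```shell
# Illustrative only: write the expected storage section to a temp
# file so it can be compared against the real /etc/mongod.conf.
cat > /tmp/mongod-storage.yml <<'EOF'
storage:
  dbPath: /data
  engine: mmapv1
EOF
cat /tmp/mongod-storage.yml
```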

2. Bind IP

By default, the MongoDB configuration sets the bindIp directive to localhost, meaning that the mongod service will not listen for any connections other than the ones initiated from localhost. Edit the /etc/mongod.conf file on all three MongoDB instances and comment out or remove the bindIp directive.

    net:
      port: 27017
      # bindIp: 127.0.0.1

Please follow our Securing MongoDB Guide to secure your MongoDB instances. Note that after removing bindIp, your MongoDB instances accept connections from anywhere, including the Internet, unless you have the necessary firewall configuration in place to prevent that.
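As one hedged example of such a firewall configuration, assuming ufw is available and 10.0.0.0/24 is a placeholder for the private subnet shared by your Countly and MongoDB servers, the rules could look like the sketch below. It collects them in a script for review rather than applying them directly:

```shell
# Sketch only: collect candidate ufw rules in a script for review.
# 10.0.0.0/24 is a placeholder; use your own private subnet.
cat > /tmp/mongodb-firewall.sh <<'EOF'
#!/bin/sh
# Allow replica set members and Countly app servers to reach mongod.
ufw allow from 10.0.0.0/24 to any port 27017
# Deny port 27017 from everywhere else.
ufw deny 27017
EOF
chmod +x /tmp/mongodb-firewall.sh
```

Review the generated script and run it with sudo on each MongoDB instance once the subnet matches your network.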

3. Configuring the MongoDB arbiter

Edit the /etc/mongod.conf file on the MongoDB arbiter server (these changes should be done only on the arbiter server) to disable journaling and enable small files:

    storage:
      journal:
        enabled: false
      mmapv1:
        smallFiles: true

These changes are made because the arbiter is not a data-bearing node and is present only for primary election purposes. You can read more about MongoDB arbiters here.

4. Final configurations and running

Edit the /etc/mongod.conf file on all three MongoDB instances and add the below replSetName directive under the replication section. Our replica set name will be rs0 (you can use any name you want here).

    replication:
      replSetName: rs0

Then, restart MongoDB on all 3 instances.

    sudo service mongod restart

5. Edit hostnames

Before initiating the replica set, make sure that all three MongoDB instances can access each other on port 27017 and that the /etc/hosts file on each of the three machines has entries such as below.

    mongodb01.yourdomain.com mongodb01

This is to ensure that when the replica set is initiated, each server will be able to access the others in the replica set.
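For illustration, the full set of /etc/hosts additions for the three servers might look like the snippet below; the 10.0.0.x addresses are placeholders for your servers' private IPs. The sketch writes them to a temporary file so you can review them before appending to /etc/hosts on each machine:

```shell
# Placeholder private IPs; replace with your servers' real addresses.
cat > /tmp/hosts.additions <<'EOF'
10.0.0.11 mongodb01.yourdomain.com mongodb01
10.0.0.12 mongodb02.yourdomain.com mongodb02
10.0.0.13 mongodb03.yourdomain.com mongodb03
EOF
# Review, then append on each machine:
#   cat /tmp/hosts.additions | sudo tee -a /etc/hosts
```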

6. Initiate the replica set

In the terminal of the primary or secondary MongoDB instance, connect to MongoDB (type in your shell):

    mongo
Then execute the below command to initiate the replica set.

    rs.initiate({
       _id: "rs0",
       members: [
          { _id: 0, host: "mongodb01.yourdomain.com:27017" },
          { _id: 1, host: "mongodb02.yourdomain.com:27017" },
          { _id: 2, host: "mongodb03.yourdomain.com:27017", arbiterOnly: true }
       ]
    })

Note that in the members array, the server that is dedicated to be the arbiter needs to have the arbiterOnly: true field present.

After initiating the replica set, if you open the mongo shell (the mongo command) on each of the three instances, each one will be marked with its role: PRIMARY, SECONDARY or ARBITER.
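To check the roles without opening an interactive shell on every server, a small helper script like the hypothetical one below can be saved and run with the mongo shell against any member; rs.status() reports every member's state:

```shell
# Hypothetical helper: prints "<host> <role>" for each member.
cat > /tmp/check_rs.js <<'EOF'
// rs.status().members lists every replica set member along with
// its current state (PRIMARY, SECONDARY, ARBITER, ...).
rs.status().members.forEach(function (m) {
  print(m.name + " " + m.stateStr);
});
EOF
# Run it from any instance:
#   mongo /tmp/check_rs.js
```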

Countly Enterprise Edition installation

On Node 1 and Node 2, you'll need to install two Countly servers as instructed here, and then make the MongoDB configurations as stated here.

Getting health checks

If you need a health check for Countly, you can use the https://URL/ping endpoint. It will return "success" if the Countly server is running at that URL.
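Since the endpoint returns the plain-text body "success", a monitoring script only needs to compare the response body against that string. A minimal sketch, with the curl line left as a comment because the URL is deployment-specific:

```shell
# Decide health from the body of the /ping response.
is_healthy() {
  [ "$1" = "success" ]
}
# In production, fetch the real body first, e.g.:
#   body=$(curl -fsS "https://your-countly-host/ping")
body="success"   # stand-in for a real response
if is_healthy "$body"; then
  echo "countly: healthy"
else
  echo "countly: unhealthy"
fi
```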

If you are using an offline, full Enterprise Edition installation package then follow the instructions below:

  • Upload the installation package to Node 1 and Node 2
  • Extract the archive
  • Run bash countly/bin/offline_installer.sh

In either online or offline installation modes, you need to run the installation script as an operating system user that has sudo privileges.
