Docker and Kubernetes Overview



Starting from version 19.08.1, Countly fully supports Docker-based setups in production. Countly may run on:

  • plain Docker (a single, all-in-one evaluation image);
  • docker-compose or docker stack;
  • Kubernetes.

The first method, plain Docker, bundles both Countly processes, Nginx, and MongoDB into a single image. It's made for evaluation purposes and, for obvious reasons, it is not meant to be used in production.


For production use, the evaluation image is split into 2 Countly images:

  • countly/api (Dockerfile-api) - API process responsible for handling data requests; it should be scaled according to your load.
  • countly/frontend (Dockerfile-frontend) - Frontend (dashboard) process, 1 container is enough for almost any use case.

Countly Enterprise

We also provide Countly Enterprise versions of the Docker images above to our customers.

Both images may be fully configured through environment variables. Neither requires persistent storage, and each exposes a single HTTP port: api exposes port 3001, while frontend exposes port 6001.
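Assuming the default ports above and an already running MongoDB, a quick local trial of the two images could be sketched like this (the container names and MongoDB URI here are illustrative placeholders):

```shell
# API process: publish its HTTP port 3001 on the host
docker run -d --name countly-api -p 3001:3001 \
  -e COUNTLY_CONFIG__MONGODB="mongodb://mongo-host/countly" \
  countly/api

# Frontend (dashboard) process: publish its HTTP port 6001
docker run -d --name countly-frontend -p 6001:6001 \
  -e COUNTLY_CONFIG__MONGODB="mongodb://mongo-host/countly" \
  countly/frontend
```

For a full setup you'd still need a reverse proxy in front of these two containers, as described below.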

Since there are 2 images, running 2 containers is not enough for a full setup - you will need a way to forward requests coming to the Countly host to either the api or the frontend. In a standard non-Docker setup, we use Nginx for this task. In cases where you use docker-compose or docker stack, we have you covered with pre-configured Nginx containers. For the Kubernetes case, we have a reference Ingress implementation.

Of course, you will still need to have the MongoDB server running. Just like with Nginx, we've made reference implementations: a simple one-node setup for docker-compose / docker stack and a Replica Set setup for Kubernetes.


Countly containers may be configured in 3 ways:

  1. By specifying the configuration options in the api/config.js and frontend/config.js files of the containers themselves.
  2. By passing environment variables to the corresponding container.
  3. A combination of both methods listed above.

For details regarding the first configuration option, please refer to Configuring Countly. In the Docker case, you may easily modify the configuration options in a running container and commit it as a new image.

Environment variables override whatever is set in the config.js files. Each config.js has a separate environment variable prefix: COUNTLY_CONFIG_API_ for the api container and COUNTLY_CONFIG_FRONTEND_ for the frontend, but you may also use the universal prefix COUNTLY_CONFIG__ (preferred; note the double underscore at the end). The actual configuration option name comes right after the corresponding prefix. The rule is simple: open the config.js file you need, for example api/config.js:

var countlyConfig = {
  mongodb: {
    host: "localhost",
    db: "countly",
    port: 27017,
    max_pool_size: 500
  },
  api: {
    port: 3001,
    host: "localhost",
    max_sockets: 1024,
    timeout: 120000
  }
};

... then write any variable path in CAPS using _ as a delimiter. For example, assuming we would like to decrease the max_pool_size from 500 to 200, we would use COUNTLY_CONFIG_API_MONGODB_MAX_POOL_SIZE=200. Or, change the database name: COUNTLY_CONFIG_API_MONGODB_DB="test_db". Values are read as JSON, so in case you need to specify an array, it would appear as COUNTLY_CONFIG_API_MONGODB_REPLSETSERVERS="[\"mongo-1:27017\", \"mongo-2:27017\"]".
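The naming rule can be double-checked with a tiny shell helper (this function is just an illustration, not part of Countly): uppercase the config.js path, replace the dots with underscores, and prepend the prefix.

```shell
# Illustrative helper: turn a config.js path into the corresponding
# environment variable name for the api container.
cfg_to_env() {
  echo "COUNTLY_CONFIG_API_$(echo "$1" | tr '.a-z' '_A-Z')"
}

cfg_to_env "mongodb.max_pool_size"  # → COUNTLY_CONFIG_API_MONGODB_MAX_POOL_SIZE
```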

Again, when not using the universal prefix COUNTLY_CONFIG__, please mirror all COUNTLY_CONFIG_API_ environment variables (unless you're sure they're not needed) with COUNTLY_CONFIG_FRONTEND_ equivalents on the frontend container, and vice versa. The bare minimum for all setups (for both API and Frontend) would appear as follows:

  - name: COUNTLY_PLUGINS
    value: "mobile,web,desktop,some,more,plugins"
  - name: COUNTLY_CONFIG__FILESTORAGE
    value: "gridfs"
  - name: COUNTLY_CONFIG__MONGODB
    value: "mongodb://"
  - name: NODE_OPTIONS
    value: "--max-old-space-size=2048"

In case you would like to configure each config.js separately, the same config would appear as follows:

  - name: COUNTLY_PLUGINS
    value: "mobile,web,desktop,some,more,plugins"
  - name: COUNTLY_CONFIG_API_FILESTORAGE
    value: "gridfs"
  - name: COUNTLY_CONFIG_API_MONGODB
    value: "mongodb://"
  - name: COUNTLY_CONFIG_FRONTEND_MONGODB
    value: "mongodb://"
  - name: NODE_OPTIONS
    value: "--max-old-space-size=2048"


Above, we used MongoDB configuration options only as an example of overriding config.js. While configuring MongoDB with one environment variable per configuration option is perfectly valid, the preferred way is to specify the MongoDB URI in a single COUNTLY_CONFIG__MONGODB variable, the same for both containers. Please refer to the MongoDB URI guide for all connection options. Usually you'd need to specify all your replica set members, the database name, authentication details, and the maximum connection pool size:

  - name: COUNTLY_CONFIG__MONGODB
    value: "mongodb://USER:PASSWORD@host1,host2,host3/countly?authSource=admin&replicaSet=rs1&maxPoolSize=500"


Countly is a Node.js-based application, and therefore uses standard Node.js clustering to utilize multiple processor cores. By default, non-Docker setups create a separate API worker process for each CPU core available in the system. Docker setups alter that behavior by running a single API worker process per container. Depending on the Docker runtime, that might not be optimal from a performance perspective, so you might need to set this environment variable to a value greater than the default of "1":

  - name: COUNTLY_CONFIG_API_API_WORKERS
    value: "4"


In a non-Docker environment, Countly ships with Sendmail to send emails. As it would be a substantial overhead to have a Sendmail server in each container, we decided to remove it and fall back to a simple SMTP mailer instead. In order to configure it, you will need to specify a set of environment variables:

  - name: COUNTLY_CONFIG__MAIL_CONFIG_HOST
    value: ""
  - name: COUNTLY_CONFIG__MAIL_CONFIG_PORT
    value: 25
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
    value: "example-user"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
    value: "example-password"

COUNTLY_CONFIG__MAIL_TRANSPORT is the Node.js module to use (only the SMTP one is usable in Docker). All the COUNTLY_CONFIG__MAIL_CONFIG options form the config object passed to the nodemailer module. In the example above, it would be {host: '', port: 25, auth: {user: 'example-user', pass: 'example-password'}}. In case you need a more precise `nodemailer` configuration, you may pass an arbitrary JSON string in the COUNTLY_CONFIG__MAIL_CONFIG environment variable. For example, in case you don't want TLS and authentication, yet your mail server uses `STARTTLS`, pass the following:

  - name: COUNTLY_CONFIG__MAIL_TRANSPORT
    value: "nodemailer-smtp-transport"
  - name: COUNTLY_CONFIG__MAIL_CONFIG
    value: '{"host": "…", "port": 25, "ignoreTLS": true}'

This approach supports any configuration supported by nodemailer-smtp-transport. For example, you may use the service option when configuring Postmark (Postmark also requires you to override the "FROM" address with an existing signature):

  - name: COUNTLY_CONFIG__MAIL_TRANSPORT
    value: "nodemailer-smtp-transport"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_SERVICE
    value: "Postmark"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
    value: "99999999-8888-7777-6666-555555555555"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
    value: "99999999-8888-7777-6666-555555555555"
  - name: COUNTLY_CONFIG__MAIL_STRINGS_FROM
    value: "Countly"

In the event that SMTP is not enough for your specific case or you would like to customize email templates, please refer to Using a 3rd party email server. For Docker, it would require modifying our Docker images with customized versions of mail.js.


In some cases, such as with email reports, the Countly API needs to access the Frontend, or vice versa. To make it possible, please set the COUNTLY_CONFIG_HOSTNAME variable with your planned Countly hostname.


Choosing Plugins at Runtime

For Docker-based installations, it's impossible to change the list of plugins at runtime. In case you need to enable or disable some plugins, you will have to drop the existing containers and start new ones with a new COUNTLY_PLUGINS environment variable value.
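Such a plugin change for a plain Docker setup could be sketched as follows (the container and image names, plugin list, and MongoDB URI are illustrative):

```shell
# Drop the existing API container...
docker rm -f countly-api

# ...and start a fresh one with the new plugin list;
# repeat the same for the frontend container.
docker run -d --name countly-api \
  -e COUNTLY_PLUGINS="mobile,web,crashes" \
  -e COUNTLY_CONFIG__MONGODB="mongodb://mongo-host/countly" \
  countly/api
```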

The most important configuration option for both Docker images is the COUNTLY_PLUGINS environment variable. It's a plain, comma-separated list of the plugin names Countly should use. All plugins are enabled by default via the COUNTLY_PLUGINS build argument.

Our images come with all the available plugins prebuilt, meaning they are ready to use from a dependencies point of view. Yet the frontend needs the actual plugin list to build production client-side JS/CSS files. Also, some of the plugins require a running MongoDB database to be correctly installed. Therefore, it cannot be done at the image build phase - your database is required.

Both images finalize the installation of the plugins at the first container boot. Note, due to this installation finalization phase, the very first container launch usually takes 1-2 minutes. All consecutive launches will be much faster.

In case you want to use custom plugins (or plugins of your own) in Countly, you will need to pre-build each image and commit it as your own image to the container registry of your choice. You will need to pre-install the plugins and re-minify all frontend files:

export CLY_VERSION=20.04
export CLY_DOCKER_HUB_USER=$(whoami)

docker run --name countly-api-prebuild \
	-e \
	-e COUNTLY_PLUGINS=mobile,crashes,push \
docker commit countly-api-prebuild "${CLY_DOCKER_HUB_USER}/countly-api:${CLY_VERSION}"
docker push "${CLY_DOCKER_HUB_USER}/countly-api:${CLY_VERSION}"

docker run --name countly-frontend-prebuild \
	-e \
	-e COUNTLY_PLUGINS=mobile,crashes,push \
docker commit countly-frontend-prebuild "${CLY_DOCKER_HUB_USER}/countly-frontend:${CLY_VERSION}"
docker push "${CLY_DOCKER_HUB_USER}/countly-frontend:${CLY_VERSION}"



Currently, the recommended memory limit is 600 MB plus 500 MB per CPU core given to the API container, and 200 MB for the Frontend container. The minimal limits just to make them boot are 400 MB for the API and 100 MB for the Frontend.


CPU usage depends solely on your use case; the general rule is to give the API containers as much as they need while keeping the Frontend limits low, e.g. 100 mCPU.
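For Kubernetes, the limits above could translate into a resources section like the following sketch, assuming an API pod capped at 1 CPU core (600 MB base + 500 MB for the core = 1100 MB; the request values are illustrative):

```yaml
resources:
  requests:
    cpu: 500m
    memory: 400Mi   # minimal amount the API needs just to boot
  limits:
    cpu: "1"
    memory: 1100Mi  # 600 MB base + 500 MB per CPU core
```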



Upgrading Docker-based Countly installations is slightly different from the standard procedure. For obvious reasons, you will not need to download new sources, but MongoDB version upgrades (if any) and data transformations still need to be made. The upgrade procedure would appear as follows:

  1. Shut down the Countly containers.
  2. Go to Upgrading Countly server and check if any special procedures (like Nginx reconfiguration or MongoDB upgrade) must be made for the version you upgrade to. Perform the procedures if needed.
  3. Run the Countly data transformation scripts one by one for each version on your upgrade path. In the example below, we are upgrading from 19.08.1 to 20.04.
  4. Start the new Countly containers.

Please replace the images below with the ones you use:

docker run -u root -e COUNTLY_PLUGINS="ACTUAL,PLUGIN,LIST" -e COUNTLY_CONFIG__MONGODB="mongodb://ACTUAL_MONGODB_URI/countly" countly/frontend:20.04 bash -c "/opt/countly/bin/docker/; bash /opt/countly/bin/upgrade/20.04/ combined"


For CentOS-based images earlier than 20.11.1, you'd also need to create a symlink (shown here for the Enterprise Edition image):

docker run -u root -e COUNTLY_PLUGINS="ACTUAL,PLUGIN,LIST" -e COUNTLY_CONFIG__MONGODB="mongodb://ACTUAL_MONGODB_URI/countly" bash -c "/opt/countly/bin/docker/; ln -s /opt/rh/rh-nodejs10/root/usr/bin/node /opt/rh/rh-nodejs10/root/usr/bin/nodejs; bash /opt/countly/bin/upgrade/20.04/ combined"  

Shutting down the Countly containers is not strictly necessary during most Countly upgrades, yet whenever you can shut them down, you should do so to ensure data consistency. In case shutting down the Countly containers is not an option for some reason, execute the db upgrades twice: once before the first new container is launched and once after the last new container is ready. This way you'll ensure the db transformations are applied to all the data, including data processed by the old containers while the new ones were still spinning up.

Adding Custom Plugins

In order to add custom plugins (or any non-standard plugins provided to you by Countly in the form of source code), you need to follow one of the paths below.

Extend one of our standard images (recommended)

The idea is that you create a new image based on our existing image. This way the image building process is relatively fast, you don't lose any of the features of our images, and don't need to care about how it works. Here's a sample Dockerfile:

FROM countly/centos-api:
USER root
COPY plugins/YOUR_PLUGIN /opt/countly/plugins/YOUR_PLUGIN
# any build-time installation procedures, i.e. dependency installation:
RUN cd /opt/countly/plugins/YOUR_PLUGIN && npm install
USER 1001:0

Here we add the plugins/YOUR_PLUGIN folder to our new image and run npm install in it to install any required dependencies. Of course, the Dockerfile above assumes you run it from a Countly source code folder which has a plugins/YOUR_PLUGIN folder in it, and that the base image version you specify is the latest one. Note that in case of the centos-api image, you'd also need to repeat the same for centos-frontend. Countly Enterprise customers would need to have the Countly Enterprise image in the FROM clause.
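Building and pushing such an extended image could then be sketched as follows (the image name and tag here are placeholders):

```shell
# Run from the Countly source code folder containing the Dockerfile
# and plugins/YOUR_PLUGIN
docker build -t YOUR_REGISTRY/countly-api:custom .
docker push YOUR_REGISTRY/countly-api:custom
```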

Build new image from scratch

In case you want full control over the building process for some reason, it's also relatively simple. The only thing you need to do is modify the default COUNTLY_PLUGINS build-time argument of any of our Dockerfiles and ensure that all the plugins you'd like to have in the image are present in the plugins folder.


docker-compose.yml is available in our Github repository. It contains a very basic setup of 1 countly/api container, 1 countly/frontend container, 1 mongodb container with the data folder exposed as a volume, and 1 nginx container used as a reverse proxy exposing port 8080.

Starting it up is very simple:

curl -O
curl -O
sed -i '' 's/bin\/docker\/nginx.server.conf/nginx.server.conf/g' docker-compose.yml
docker-compose up

Once started (remember, it can take a minute or two for the very first start), your brand new Countly setup is available on port 8080 (or any other port you set in docker-compose.yml).

docker stack

Our stack definition is also in docker-compose.yml, and is available on our Github repository. The only addition we have for docker stack is throttling the frontend container down to 0.5 CPU.


The Countly reference Kubernetes setup is based on the Google Kubernetes Engine, however, with a few modifications (SSD disks for MongoDB and static IP address for Ingress), it becomes applicable to any Kubernetes cluster.


The basic Kubernetes setup for Countly includes the following components:

  • the MongoDB replica set installed from the mongodb-replicaset Helm chart, backed by SSD disks;
  • a countly-api service wrapping a countly-api-deployment with 2 countly/api pods;
  • a countly-frontend service wrapping a countly-frontend-deployment with 1 countly/frontend pod;
  • a countly-ingress Ingress in front of the services above.

Setting up a Kubernetes Cluster

The following assumes you have already set up kubectl & helm. The full script, including basic kubectl & helm configurations, is available in our Github repository.

First, create a countly namespace:

kubectl create ns countly
kubectl config set-context --current --namespace=countly

The installation and configuration of a MongoDB cluster are out of the scope of this document, so we'll just leave some starting points here for consistency (note, the storage-class.yaml below contains a GCE-specific provisioner and an SSD disk type):

cd countly/bin/docker/k8s
kubectl apply -f mongo/storage-class.yaml
helm install --name mongo -f mongo/values.yaml stable/mongodb-replicaset

Then we will need to create Countly deployments and services:

kubectl apply -f countly-frontend.yaml
kubectl apply -f countly-api.yaml

Note that the countly-api.yaml and countly-frontend.yaml deployments contain the configuration environment options we have covered above, including the MongoDB connection URL and the COUNTLY_PLUGINS environment variable.

Once Countly deployments are up and running, we'll also need to expose the setup to the outer world. This is done with the help of a static IP address and an Ingress configured to forward incoming requests either to the countly-api or countly-frontend services. Our countly-ingress.yaml contains the TLS secret definition; please replace the placeholders with your certificate, key, and hostname before creating the Ingress:

gcloud compute addresses create countly-static-ip --global
kubectl apply -f gce/countly-ingress.yaml

Please note that the full Countly stack deployment and corresponding health checks can easily take 10-20 minutes, so give it some time.

The only thing left is the creation of a DNS A-record pointing at the IP address, which you can get by running:

kubectl get ing
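If you'd rather script this step, the same address can be extracted directly with a jsonpath query (assuming the Ingress is named countly-ingress, as in the reference setup):

```shell
# Print only the external IP assigned to the Ingress
kubectl get ingress countly-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```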
