Docker & Kubernetes



Starting from version 19.08.1, Countly fully supports Docker-based setups in production. Countly can be run in two ways:

  • as a single all-in-one container ("Plain Docker") for evaluation purposes;
  • as separate api & frontend containers orchestrated with docker-compose, docker stack, or Kubernetes.

The first way, that is "Plain Docker", bundles together both Countly processes, Nginx & MongoDB. It's made for evaluation purposes and is not supposed to be used in production for obvious reasons.


Taking the evaluation image apart, there are 2 Countly images:

  • countly/api (Dockerfile-api) - API process responsible for handling data requests, which should be scaled accordingly.
  • countly/frontend (Dockerfile-frontend) - Frontend (dashboard) process, 1 container is enough for almost any use case.

Enterprise Edition

We also provide Enterprise versions of the Docker images above to our customers.

Both images can be fully configured through environment variables. Neither requires persistent storage, and each exposes a single HTTP port: api exposes port 3001, while frontend exposes port 6001.

Since there are 2 images, running 2 containers is not enough for a full setup - you need a way to forward requests coming to the Countly host to either api or frontend. In a standard non-Docker setup, we use Nginx for this task. When using docker-compose or docker stack, we have you covered with pre-configured Nginx containers. For the Kubernetes case, we have a reference Ingress implementation.
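To make the routing concrete, here is a minimal sketch of such forwarding rules. This is not the bundled nginx.server.conf verbatim, and the api / frontend upstream hostnames are assumptions that depend on your container names:

```nginx
server {
  listen 80;

  # SDK traffic (write and read API) goes to the api container
  location /i { proxy_pass http://api:3001; }
  location /o { proxy_pass http://api:3001; }

  # everything else goes to the dashboard
  location / { proxy_pass http://frontend:6001; }
}
```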

And of course, you still need a running MongoDB server. Just like with Nginx, we've made reference implementations: a simple one-node setup for docker-compose / docker stack & a Replica Set setup for Kubernetes.


Countly containers can be configured in 3 ways:

  1. By specifying configuration options in api/config.js & frontend/config.js files of the containers themselves.
  2. By passing environment variables to the corresponding container.
  3. A combination of both methods above.

For details regarding the first configuration option, please refer to Configuring Countly. In the case of Docker, you can easily modify configuration options in a running container and commit it as a new image.

Environment variables override whatever is set in the config.js files. Each config.js has its own env variable prefix: COUNTLY_CONFIG_API_ for the api container & COUNTLY_CONFIG_FRONTEND_ for the frontend one, but you can also use the universal prefix COUNTLY_CONFIG__ (preferred; note the double underscore at the end), which applies to both. The actual configuration option name comes right after the prefix. The rule is simple: open the config.js file you need, for example api/config.js:

var countlyConfig = {
  mongodb: {
    host: "localhost",
    db: "countly",
    port: 27017,
    max_pool_size: 500
  },
  api: {
    port: 3001,
    host: "localhost",
    max_sockets: 1024,
    timeout: 120000
  }
};

... then write the variable path in CAPS, using _ as a delimiter. For example, to decrease max_pool_size from 500 to 200, set COUNTLY_CONFIG_API_MONGODB_MAX_POOL_SIZE=200. Or, to change the database name: COUNTLY_CONFIG_API_MONGODB_DB="test_db". Values are read as JSON, so in case you need to specify an array, that would be COUNTLY_CONFIG_API_MONGODB_REPLSETSERVERS="[\"mongo-1:27017\", \"mongo-2:27017\"]".
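The naming rule above can be sketched as a tiny helper function (hypothetical, for illustration only; it simply uppercases the dot-separated option path and appends it to the chosen prefix):

```shell
# Derive the env variable name for a config.js option path.
cly_env_name() {
  # $1 = prefix (e.g. COUNTLY_CONFIG_API_), $2 = dot-separated path
  printf '%s%s\n' "$1" "$(printf '%s' "$2" | tr '.a-z' '_A-Z')"
}

cly_env_name COUNTLY_CONFIG_API_ mongodb.max_pool_size
# prints COUNTLY_CONFIG_API_MONGODB_MAX_POOL_SIZE
```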

Again, when you don't use the universal prefix COUNTLY_CONFIG__, please apply all COUNTLY_CONFIG_API_ env variables (unless you're sure they're not needed) to the frontend container and vice versa. The bare minimum for all setups (for both API & Frontend) would look like this:

 - name: COUNTLY_PLUGINS
   value: "mobile,web,desktop,some,more,plugins"
 - name: COUNTLY_CONFIG__FILESTORAGE
   value: "gridfs"
 - name: COUNTLY_CONFIG__MONGODB
   value: "mongodb://"
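The same bare minimum can also be expressed as plain shell exports, e.g. for passing to docker run via -e or --env-file. Variable names follow the prefix rule described earlier (the file-storage option name assumes the fileStorage setting from api/config.js); the plugin list and MongoDB URI below are placeholders:

```shell
export COUNTLY_PLUGINS="mobile,web,desktop"
export COUNTLY_CONFIG__FILESTORAGE="gridfs"
# placeholder URI; point it at your actual MongoDB
export COUNTLY_CONFIG__MONGODB="mongodb://mongo:27017/countly"
```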

In case you want to configure each config.js separately, the same config would look like:

 - name: COUNTLY_PLUGINS
   value: "mobile,web,desktop,some,more,plugins"
 - name: COUNTLY_CONFIG_API_FILESTORAGE
   value: "gridfs"
 - name: COUNTLY_CONFIG_API_MONGODB
   value: "mongodb://"
 - name: COUNTLY_CONFIG_FRONTEND_MONGODB
   value: "mongodb://"


In a non-Docker environment, Countly ships with Sendmail to send emails. As it would be substantial overhead to run a Sendmail server in each container, we decided to remove it and fall back to a simple SMTP mailer instead. To configure it, you need to specify a set of environment variables:

 - name: COUNTLY_CONFIG__MAIL_TRANSPORT
   value: "nodemailer-smtp-transport"
 - name: COUNTLY_CONFIG__MAIL_CONFIG_HOST
   value: ""
 - name: COUNTLY_CONFIG__MAIL_CONFIG_PORT
   value: 25
 - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
   value: "example-user"
 - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
   value: "example-password"

COUNTLY_CONFIG__MAIL_TRANSPORT here is the Node.js module to use (only the SMTP one is usable in Docker). All the COUNTLY_CONFIG__MAIL_CONFIG options together form the config object passed to the nodemailer module; in the example above it would be {host: '', port: 25, auth: {user: 'example-user', pass: 'example-password'}}. In case you need more precise nodemailer configuration, you can pass an arbitrary JSON string in the COUNTLY_CONFIG__MAIL_CONFIG environment variable. For example, in case you don't want TLS and authentication, yet your mail server goes the `STARTTLS` path:
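Since the whole value is read as JSON, it is worth validating the string locally before deploying. A quick sanity check (the host value is a placeholder):

```shell
# Validate the JSON you plan to put into COUNTLY_CONFIG__MAIL_CONFIG.
MAIL_CONFIG='{"host": "smtp.example.com", "port": 25, "ignoreTLS": true}'
if printf '%s' "$MAIL_CONFIG" | python3 -m json.tool >/dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```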

 - name: COUNTLY_CONFIG__MAIL_TRANSPORT
   value: "nodemailer-smtp-transport"
 - name: COUNTLY_CONFIG__MAIL_CONFIG
   value: '{"host": "…", "port": 25, "ignoreTLS": true}'

This approach supports any configuration supported by nodemailer-smtp-transport. For example, to configure Postmark you can use the service option (Postmark also requires you to override the "FROM" address with an existing signature):

 - name: COUNTLY_CONFIG__MAIL_TRANSPORT
   value: "nodemailer-smtp-transport"
 - name: COUNTLY_CONFIG__MAIL_CONFIG_SERVICE
   value: "Postmark"
 - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
   value: "99999999-8888-7777-6666-555555555555"
 - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
   value: "99999999-8888-7777-6666-555555555555"
 - name: COUNTLY_CONFIG__MAIL_STRINGS_FROM
   value: "Countly <>"

In case SMTP is not enough or you'd like to customize email templates, please refer to Using a 3rd party email server. For Docker, this requires modifying our Docker images with customized versions of mail.js.


Choosing plugins at runtime

For Docker-based installations, it's impossible to change the list of plugins at runtime. In case you need to enable or disable some plugins, you'll have to drop existing containers and start new ones with a new COUNTLY_PLUGINS environment variable value.

The most important configuration option for both Docker images is the COUNTLY_PLUGINS environment variable: a plain comma-separated list of the plugin names Countly should use. By default, all plugins are enabled.

Our images have all available plugins prebuilt, meaning they are ready to use from a dependencies point of view. Yet the frontend needs the actual plugin list to build production client-side JS/CSS files. Also, some plugins require a running MongoDB database to be correctly installed, so this cannot be done at the image build phase - your database is required.

Both images finalize plugin installation at the first container launch. Note that because of this finalization phase, the very first container launch usually takes 1-2 minutes; all subsequent launches will be much faster. If the delay is not an option for your particular case, you can pre-build each image and commit it as your own image to a container registry of your choice:

export CLY_VERSION=19.08.1
export CLY_DOCKER_HUB_USER=$(whoami)

docker run --name countly-api-prebuild \
	-e COUNTLY_CONFIG__MONGODB="mongodb://ACTUAL_MONGODB_URI/countly" \
	-e COUNTLY_PLUGINS=mobile,crashes,push \
	countly/api:${CLY_VERSION}
docker commit countly-api-prebuild "${CLY_DOCKER_HUB_USER}/countly-api:${CLY_VERSION}"
docker push "${CLY_DOCKER_HUB_USER}/countly-api:${CLY_VERSION}"

docker run --name countly-frontend-prebuild \
	-e COUNTLY_CONFIG__MONGODB="mongodb://ACTUAL_MONGODB_URI/countly" \
	-e COUNTLY_PLUGINS=mobile,crashes,push \
	countly/frontend:${CLY_VERSION}
docker commit countly-frontend-prebuild "${CLY_DOCKER_HUB_USER}/countly-frontend:${CLY_VERSION}"
docker push "${CLY_DOCKER_HUB_USER}/countly-frontend:${CLY_VERSION}"


In some cases, such as email reports, the Countly API needs to access the Frontend or vice versa. To make this possible, please set the COUNTLY_CONFIG_HOSTNAME variable to your planned Countly hostname.
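In a Kubernetes manifest or stack definition, that is one more env entry (the hostname below is a placeholder):

```yaml
- name: COUNTLY_CONFIG_HOSTNAME
  value: "countly.example.com"
```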


Upgrading Docker-based Countly installations is slightly different from the standard procedure. For obvious reasons, you don't need to download new sources, but MongoDB version upgrades (if any) and data transformations still have to be performed. The upgrade procedure looks like this:

  1. Go to Upgrading Countly server and check if any special procedures must be made for the version you upgrade to.
  2. Shut down Countly containers.
  3. Upgrade your MongoDB instance or cluster if needed.
  4. Run the Countly data transformation scripts one by one, for each version step you upgrade through. In the example below we upgrade from 19.02 to 19.08.
  5. Start new Countly containers.
docker run -e COUNTLY_PLUGINS="ACTUAL,PLUGIN,LIST" -e COUNTLY_CONFIG__MONGODB="mongodb://ACTUAL_MONGODB_URI/countly" countly/frontend:latest bash -c "/opt/countly/bin/docker/; bash /opt/countly/bin/upgrade/19.08/ combined"


Shutting down Countly containers is not strictly necessary during most Countly upgrades, but you should do it whenever possible to ensure data consistency. In case shutting down Countly containers is not an option for some reason, execute the db upgrades twice: before the first new container is started and after the last new container is ready. This way you'll ensure db transformations occurred for all data, including data still being processed by old containers while the new ones were spinning up.


docker-compose.yml is available in our Github repository. It defines a very basic setup: 1 countly/api container, 1 countly/frontend container, 1 mongodb container with its data folder exposed as a volume, and 1 nginx container used as a reverse proxy exposing port 8080.

Starting it up is very simple:

curl -O
curl -O
sed -i '' 's/bin\/docker\/nginx.server.conf/nginx.server.conf/g' docker-compose.yml
docker-compose up

Once started (remember, it can take a minute or two for the very first start), your brand new Countly setup is available on port 8080 (or any other port you set in docker-compose.yml).
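Once up, you can run a quick smoke test from the host. This assumes port 8080 as configured above and that the API's /o/ping endpoint is reachable; adjust both to your setup:

```shell
# Succeeds silently when Countly responds; otherwise reports it is not up.
curl -sf http://localhost:8080/o/ping || echo "not ready yet"
```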

docker stack

Our stack definition is also in docker-compose.yml available on our Github repository. The only addition we have for docker stack is throttling frontend container down to 0.5 CPU.


The Countly reference Kubernetes setup is based on Google Kubernetes Engine, but with a few modifications (SSD disks for MongoDB & a static IP address for Ingress) it is applicable to any Kubernetes cluster.


Basic Kubernetes setup for Countly includes the following components:

  • MongoDB replica set installed from the mongodb-replicaset Helm chart, backed by SSD disks;
  • countly-api service wrapping a countly-api-deployment with 2 countly/api pods;
  • countly-frontend service wrapping a countly-frontend-deployment with 1 countly/frontend pod;
  • countly-ingress Ingress in front of the services above.

Setting up Kubernetes cluster

The following assumes you have already set up kubectl & helm. The full script, including basic kubectl & helm configuration is available in our Github repository.

First, create countly namespace:

kubectl create ns countly
kubectl config set-context --current --namespace=countly
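The same namespace can also be created declaratively, which is convenient if you keep all manifests under version control:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: countly
```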

Installation and configuration of a MongoDB cluster is out of the scope of this document, so we'll just leave some starting points here for consistency (note that storage-class.yaml below contains a GCE-specific provisioner and SSD disk type):

cd countly/bin/docker/k8s
kubectl apply -f mongo/storage-class.yaml
helm install --name mongo -f mongo/values.yaml stable/mongodb-replicaset

Then we need to create Countly deployments & services:

cd countly/bin/docker/k8s
kubectl apply -f countly-frontend.yaml
kubectl apply -f countly-api.yaml

Note that countly-api.yaml & countly-frontend.yaml deployments contain the configuration environment options we have covered above, including MongoDB connection URL & COUNTLY_PLUGINS environment variable.
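As a rough sketch (not the repository file verbatim), the env section of such a deployment might look like the following; the MongoDB service hostname is an assumption that depends on your Helm release name, and the plugin list is a placeholder:

```yaml
env:
  - name: COUNTLY_PLUGINS
    value: "mobile,web,desktop"
  - name: COUNTLY_CONFIG__MONGODB
    value: "mongodb://mongo-mongodb-replicaset-0.mongo-mongodb-replicaset:27017/countly?replicaSet=rs0"
```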

Once the Countly deployments are up and running, we also need to expose the setup to the outside world. This is done with a static IP address and an Ingress configured to forward incoming requests either to the countly-api or to the countly-frontend service. Our countly-ingress*.yaml files contain a TLS secret definition; please replace the placeholders with your certificate, key, and hostname before creating the Ingress:

gcloud compute addresses create countly-static-ip --global
kubectl apply -f countly-ingress-gce.yaml

Note that full Countly stack deployment and corresponding health checks can easily take 10-20 minutes, so give it some time.

The only thing left is the creation of DNS A-record with an IP-address you can get by running:

kubectl get ing