Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. The control plane (master) is managed by Azure, while the worker nodes are managed by you. An AKS cluster is a Kubernetes cluster created on Azure Kubernetes Service; it can be accessed from a local machine's terminal to manage Kubernetes components such as Deployments, Services, and Pods.
To get started, you must first have:
- Azure Portal access with an active subscription, and
- The Azure CLI installed on your local machine, along with kubectl and Helm, to access nodes and services.
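As a quick sanity check, the prerequisite tooling can be verified from a terminal (a sketch only; installation commands vary by operating system):

```shell
# Confirm the Azure CLI, kubectl, and Helm are installed and on PATH
az version
kubectl version --client
helm version
```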
Creating an AKS Cluster
- Go to Kubernetes Service using the search bar on the Azure portal.
- Create a Kubernetes Cluster via the Create tab.
- Configure the following options on the Basic page:
- Select the Subscription and the Resource group in Project details.
- Configure the Cluster details:
- Set the Cluster preset configuration to Standard, unless your workload calls for a different preset.
- Set your cluster name in Kubernetes cluster name, your preferred deployment region in Region, and the high-availability offering in Availability zones.
- Keep the default Kubernetes version. Note that the version can be upgraded after the cluster is created.
- Configure the Node Size and the Scaling method of the Primary node pool:
- Select the Node size based on your estimated data volume. This is the VM size of your Kubernetes worker nodes and directly determines the initial capacity of your cluster. Use the table in the Change size UI to find a node type with the CPU and memory your workload requires.
- Set the Scale method to Autoscale so the cluster runs efficiently with the right number of nodes for the workloads present.
- Set the Node count range. For production workloads, at least 3 nodes are recommended for resiliency. For development or test workloads, only 1 node is required.
- Proceed to Node Pools via Next: Node Pools.
- Keep the default Node pools options and go to Authentication via Next: Authentication.
- Configure Authentication of the Kubernetes Cluster:
- In Cluster infrastructure, select System-assigned managed identity as an Authentication method so that additional resources like load balancers and managed disks in Azure can be handled by AKS automatically using a managed identity.
- Enable the Role-based access control (Kubernetes RBAC) option to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
- Proceed with Default Encryption type and then go to Networking via Next: Networking.
- Configure Networking of the Kubernetes Cluster:
- Keep the default kubenet network configuration, which assigns network resources to pods.
- Assign a DNS name prefix; it is used to connect to the Kubernetes API when managing containers after the cluster is created.
- Keep Default options for the rest of the settings and go to Integrations via Next: Integrations.
- Enable the Container monitoring in the Integrations tab and keep the Default options for the rest of the settings. Then, go to Tags via Next: Tags.
- Add the Tags to categorize resources and view the consolidated billing by applying the same tag to multiple resources and resource groups. Then, review the setting via Next: Review + Create.
- Create the cluster upon successfully passing the Validation test of the Kubernetes Cluster, based on the settings above.
It will take a few minutes to initialize and create the AKS cluster. After the creation is completed, the AKS cluster will be available in the Kubernetes Resource tab.
Connecting to the Azure Kubernetes Cluster
After creating the cluster as mentioned above, you can view the details of the cluster in the Kubernetes Service tab in the Microsoft Azure Portal. Click on the Name of the cluster you created to find more details about the Kubernetes Cluster and Configuration.
To connect to the Kubernetes cluster using Azure CLI, use the commands below:
1. Set the subscription:
az account set --subscription <Subscription ID>
2. After setting the subscription, download the cluster credentials and configure the Kubernetes CLI to use them with the az aks get-credentials command:
az aks get-credentials --resource-group <name of resource group> --name <name of cluster>
3. Verify the connection to your cluster using the kubectl get nodes command, which returns a list of the cluster nodes:
kubectl get nodes
Deploying Countly Application on Kubernetes Cluster
The following assumes you have already set up Helm. Service, Deployment, and Ingress resource configurations are available in our GitHub repository.
- First, create a namespace countly and set it as the default, so that the services and application pods are deployed in the countly namespace and the resources stay isolated within a single cluster.
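A minimal sketch of the namespace setup (note that Kubernetes namespace names must be lowercase DNS labels, so countly rather than Countly):

```shell
# Create the namespace and make it the default for subsequent kubectl commands
kubectl create namespace countly
kubectl config set-context --current --namespace=countly
```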
- After creating the namespace, create a StorageClass with an Azure-specific provisioner and disk type:
kubectl apply -f storageclass.yaml
kubectl get storageclass
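The referenced storageclass.yaml is not reproduced here; a minimal sketch using the Azure disk provisioner might look like the following (the class name, disk SKU, and reclaim policy are assumptions):

```yaml
# storageclass.yaml - sketch of an Azure managed-disk StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain     # assumed name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS  # Azure disk type
  kind: Managed
reclaimPolicy: Retain              # keep disks when claims are deleted
```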
- Install MongoDB and set up a replica set configuration prior to installing Countly's API and Frontend pods, as plugins installation is dependent on MongoDB. Use the commands below:
helm install mongo -f mongo/values.yaml stable/mongodb-replicaset
To verify the installation, check the pods generated for MongoDB, as shown below:
kubectl get pods
- Before deploying the Countly application containers, create a Kubernetes Secret to authenticate to and access the Enterprise Edition Docker Images from our Private Google Container Registry.
To create the Secret, refer to this Guide.
This step applies only to Countly Enterprise Edition.
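For reference, an image-pull Secret for a private registry is typically created along these lines (the secret name and all credential values below are placeholders; follow the linked guide for the exact values):

```shell
# Create a docker-registry Secret for pulling private images
kubectl create secret docker-registry countly-registry-secret \
  --docker-server=<registry URL> \
  --docker-username=<username> \
  --docker-password=<password or key>
```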
- Once the MongoDB pods are running, create Countly Deployments and Services for the API and the Frontend.
countly-api.yaml needs to be edited with key:value pairs in its env section to configure the pods with relevant values (refer to the env config guide):
- name: COUNTLY_PLUGINS
value: "mobile,web,desktop,some,more,plugins" #<Enterprise or Community Plugins>
- name: COUNTLY_CONFIG__FILESTORAGE
- name: COUNTLY_CONFIG__MONGODB
value: "mongodb://some.mongo.host/countly" #<Mongodb pod connection names>
- name: COUNTLY_CONFIG_HOSTNAME
value: countly.example.com #<Domain name required as url>
- name: COUNTLY_CONFIG__MAIL_TRANSPORT
- name: COUNTLY_CONFIG__MAIL_CONFIG_HOST
- name: COUNTLY_CONFIG__MAIL_CONFIG_PORT
- name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
- name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
kubectl apply -f countly-frontend.yaml
kubectl apply -f countly-api.yaml
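After applying the manifests, the rollout can be checked with standard kubectl commands (the deployment names here assume the defaults from the manifests):

```shell
# Wait for the deployments to finish rolling out, then list pods and services
kubectl rollout status deployment/countly-api
kubectl rollout status deployment/countly-frontend
kubectl get pods,svc
```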
- Once the Countly Services and Deployments are up and running, you will also need to expose the setup so that it is publicly accessible.
This is done by setting up an Ingress resource configured to forward incoming requests to either the countly-api or the countly-frontend Service, based on the route defined.
To do this, enable the Application Gateway ingress controller in the Networking tab of your cluster and create an Application Gateway.
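Alternatively, the Application Gateway ingress controller add-on can be enabled from the Azure CLI; a sketch (the gateway name and subnet CIDR are assumptions):

```shell
# Enable the AGIC add-on on the existing AKS cluster
az aks enable-addons \
  --resource-group <name of resource group> \
  --name <name of cluster> \
  --addons ingress-appgw \
  --appgw-name <application gateway name> \
  --appgw-subnet-cidr "10.2.0.0/16"
```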
- After enabling the Ingress controller, create the Kubernetes secret to enable SSL for the URL mapped with your service. The command below will help create a TLS Secret:
kubectl create secret tls <add name to secret> --key <path-to-key> --cert <path-to-cert>
kubectl get secret #To view the secret created
- After generating the TLS Secret, create the Ingress resource to route the traffic based on the path configured for the Countly application:
apiVersion: extensions/v1beta1
- secretName: <Secret name created in 6th step>
- host: <Hostname>
- path: /i
- path: /i/*
- path: /o
- path: /o/*
- path: /*
To view the Ingress created, use the command below:
kubectl get ingress
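For reference, a complete Ingress with the routes above might look like the following sketch, written against the current networking.k8s.io/v1 API (extensions/v1beta1 was removed in Kubernetes 1.22); the resource name, Service names, and ports are assumptions, and with pathType: Prefix the /i and /o entries also cover /i/* and /o/*:

```yaml
# Sketch of an Ingress routing API traffic to countly-api and all
# remaining traffic to countly-frontend (names/ports are assumed)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: countly-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
        - <Hostname>
      secretName: <Secret name created in 6th step>
  rules:
    - host: <Hostname>
      http:
        paths:
          - path: /i
            pathType: Prefix
            backend:
              service:
                name: countly-api       # assumed Service name
                port:
                  number: 3001
          - path: /o
            pathType: Prefix
            backend:
              service:
                name: countly-api
                port:
                  number: 3001
          - path: /
            pathType: Prefix
            backend:
              service:
                name: countly-frontend  # assumed Service name
                port:
                  number: 6001
```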
The final step is to map a DNS A record to the IP address associated with the Ingress.
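If your zone is hosted in Azure DNS, the A record can be created from the CLI; a sketch with placeholder zone and record names:

```shell
# Get the public IP assigned to the Ingress (see the ADDRESS column),
# then map the A record to it in the hosted zone
kubectl get ingress
az network dns record-set a add-record \
  --resource-group <name of resource group> \
  --zone-name example.com \
  --record-set-name countly \
  --ipv4-address <Ingress IP>
```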