Getting started with AKS
Before containers, application deployment typically relied on immutable infrastructure: you built VM images with the application baked in, then deployed those images behind a load balancer that distributed incoming requests across the individual servers.
The drawback of this approach is that you now need to keep the VMs updated. VMs are also heavy in terms of size, since each one carries a full guest OS just to serve your application.
Container runtimes like Docker solve this by packaging the application and its dependencies together while sharing the host kernel, virtualizing at the OS level rather than the hardware level. This also lets you pack more workloads onto each machine and maximize CPU utilization.
The next natural question was how to distribute container images. The answer is a container registry such as Docker Hub or Azure Container Registry: developers publish their application as an image, and others can pull it from the registry and run it. This also makes it easy to integrate containers into your existing CI/CD pipelines to deploy to various environments.
To run all these containers across many machines you need an orchestrator, and that is where Kubernetes (k8s) comes in.
Kubernetes High Level Architecture
At a very basic level, Kubernetes provides a high-level API surface that abstracts the underlying cluster resources (compute, memory, network) so developers can interact with it to run containers. Typically the developer uses a command-line tool like kubectl, which talks to the API server over HTTPS to orchestrate workloads.
The cluster has two major segments. The first is the control plane, which serves the k8s API, maintains cluster state in its own distributed key-value datastore (etcd), and runs the scheduler that assigns containers to worker nodes. It also runs controllers that watch the state of the nodes and containers to ensure the declared state is always met.
The worker nodes (VMs, usually VM scale sets in Azure) run the actual user application as containers using a container runtime like Docker. Each node runs a k8s agent named kubelet, which monitors the state of the node and interacts with the control plane to create and update workloads, and kube-proxy, which handles networking.
While running a k8s cluster on your own can be daunting, Azure simplifies this with the managed AKS offering, where the control plane is fully managed by Azure. You don’t have to worry about operating it, so your team can focus on application development and deployments.
Let’s now look at the concepts involved in a typical Kubernetes deployment on AKS.
Worker nodes run the actual containerized user app. They are typically VMs running on a hypervisor, each with a container runtime like Docker, the k8s agent (kubelet), and kube-proxy.
In Kubernetes you can’t directly deploy a container to worker nodes. Instead you deploy a Pod. A Pod is essentially a wrapper around your containerized app with some additional metadata. You define your Pod declaratively in a .yml file.
Here is what we will work with:
- A simple .NET Core web app. You don’t need to build this yourself to try Kubernetes out.
- I have generated a Docker image from it and published it to Docker Hub.
- Below is the Pod.yml file that declaratively defines a name for the Pod, a label for the app, the name of the container, the image to run (from your container registry), and the container port at which the web app listens.
apiVersion: v1              # Specifies the version of the k8s API
kind: Pod                   # Declares that this is a Pod definition
metadata:
  name: webapp-pod          # Name of the Pod
  labels:
    app: webapp             # Apply label app=webapp
spec:
  containers:
  - name: webapp-ctr        # Name of the container
    image: ilabsllc/demoapp # Container image to run
    ports:
    - containerPort: 8080   # The port at which the web app runs
kubectl apply -f Pod.yml is how you submit the file to the k8s API, which then creates a Pod REST object (we will see more in later sections). The scheduler then assigns the Pod to a worker node based on the information in the Pod spec.
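A quick sketch of how you can check the result after applying the manifest, using standard kubectl commands (webapp-pod is the Pod name defined above):

```shell
# Watch the Pod move from Pending to Running
kubectl get pods

# Inspect scheduling events and which node the Pod landed on
kubectl describe pod webapp-pod
```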
While you could deploy this Pod directly for learning or test purposes, you typically use a Deployment instead. A Deployment wraps a Pod template along with additional criteria, such as the number of replicas, to be met by the cluster. The deploy.yml below deploys the same Pod with some additional conditions set; for example, replicas: 3 states that we want 3 Pod instances running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deploy       # Name of the Deployment (yours may differ)
spec:
  replicas: 3               # Run 3 Pod instances
  selector:
    matchLabels:
      app: webapp           # Must match the Pod template label below
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp-ctr
        image: ilabsllc/demoapp
        ports:
        - containerPort: 8080
Deploying the above deploy.yml creates the Pods and runs your container image as 3 replicas across the nodes, but there isn’t a way to reach those web apps from outside the cluster, or even from within the cluster in a stable way. Remember that each Pod gets its own private cluster IP address, and Pods can be created and destroyed throughout the lifetime of the cluster due to scale up/scale down, new version rollouts, and so on. So we can’t rely on those individual IPs, as they keep changing.
A Service is another object in the API that provides a stable cluster IP address and load balances across the Pods whose labels match its selector. All you need to do is define svc.yml as follows.
apiVersion: v1
kind: Service
metadata:
  name: svc-webapp      # Referenced later by the ingress
spec:
  selector:
    app: webapp         # Note that the selector should match the Pod label
  ports:
  - port: 80            # This is the port for the Service
    targetPort: 8080    # This is your container port from Pod.yml
In addition, we will enable an Application Gateway as part of the AKS cluster installation, which we will see in detail shortly. So all we need in order to load balance across the Pods from outside the cluster, via the Service, is an Ingress, defined by ingress.yml below, which the ingress controller translates into Application Gateway configuration in Azure.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress          # Name of the Ingress (yours may differ)
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: svc-webapp   # This should match your Service name
          servicePort: 80           # This should match your Service port
Creating AKS cluster with Application Gateway
Okay, enough theory. Let’s create an AKS cluster. You can install the Azure CLI on your machine or use Azure Cloud Shell.
On a Mac you can just run:
brew install azure-cli
Step 1: Log in to Azure
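The login step is a single command (the Azure CLI opens a browser window for authentication):

```shell
# Authenticate the Azure CLI against your subscription
az login
```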
Step 2: Create a resource group
Choose a location near you. I have chosen westus.
az group create --name AKSDemo --location westus
Step 3: Create AKS with AGIC (application gateway ingress controller)
[Replace nameOfMyaks and nameOfMyk8sAppGateway with your own names.]
az aks create -n nameOfMyaks -g AKSDemo \
--network-plugin azure \
-a ingress-appgw \
--appgw-name nameOfMyk8sAppGateway \
--appgw-subnet-cidr "10.2.0.0/16"
Creation of the cluster takes some time, so wait until it finishes.
Deploy your app
For this you need the kubectl command-line tool, which talks to the k8s API. You can find install instructions for your OS at https://kubernetes.io/docs/tasks/tools/.
On a Mac you can just run:
brew install kubectl
I just love the simplicity of brew on Macs. Let’s now go over the steps for deploying our web app and making it available over the internet.
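One step worth making explicit: before kubectl can reach the new cluster, its credentials need to be merged into your local kubeconfig. A sketch, assuming the resource group and cluster names used in the create command above:

```shell
# Merge the AKS cluster credentials into ~/.kube/config
az aks get-credentials -g AKSDemo -n nameOfMyaks
```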
Step 1: Deploy your containerized app as Pods
Download the deploy.yml file (kubectl would accept a raw GitHub URL too, but downloading it lets you understand it better by going over it in your local editor). Then run the following command.
kubectl apply -f deploy.yml
Now check the status using kubectl get pods. Run it a few times and watch the Pods change state to Running; this should happen relatively quickly, given this is a very simple web app.
Step 2: Deploy your service now.
Download the svc.yml file
kubectl apply -f svc.yml
Based on your Service definition, it will now map to all the Pods whose app label matches the value webapp; in our case, that is the 3 Pods we deployed earlier. You can check this by running the following command.
kubectl get svc
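To confirm which Pods the Service actually selected, you can also inspect its endpoints (svc-webapp is the Service name from svc.yml):

```shell
# List the Pod IPs backing the Service; expect 3 entries
kubectl get endpoints svc-webapp
```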
Step 3: Deploy your ingress
Download the ingress.yml file
kubectl apply -f ingress.yml
This takes some time to finish updating, so run the following command repeatedly until the ADDRESS column shows an IP address.
kubectl get ingress
Test Your app !
Once the external IP shows up, paste it into your browser and voilà, you should see the sample web app! All you need to do now is point your domain at this IP address.
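You can also sanity-check from the terminal (replace <EXTERNAL-IP> with the address reported by kubectl get ingress):

```shell
# Expect an HTTP 200 and the app's HTML in the response
curl -i http://<EXTERNAL-IP>/
```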
Congratulations, you have now successfully deployed a containerized .NET Core web app onto a brand new AKS cluster and configured everything so it can be reached from the internet in a load-balanced way.
I hope you found this a great first step in learning k8s. Don’t forget to delete your cluster on Azure.
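Deleting the resource group removes the cluster and the application gateway along with it (AKSDemo is the group created earlier):

```shell
# Tear down everything created in this walkthrough
az group delete --name AKSDemo --yes --no-wait
```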
I will post more advanced topics later.