Azure Stack HCI with Kubernetes – part 2


3 January 2021


 

Introduction to Virtual Machines and Containers

Back in 2016 Microsoft released a new type of OS called Nano Server, along with the Windows Containers feature. Kubernetes had just been released, and Docker had already been working on containers for some time. Back then it was all jokes about shipping containers and garbage containers, but since then container usage has grown rapidly and has been adopted by all the big vendors on a large scale. Containers have become yet another game changer in today’s IT infrastructures and application development.

Today all the big cloud providers, such as Microsoft with Azure, Amazon with AWS, and Google with GCP, offer container services based on Docker and Kubernetes. If you want to run containers in your own datacenter, you can use Docker and Kubernetes on Linux, or Windows Containers with Docker on Windows.

On the other hand, virtual machines are commonplace these days and will stay with us for a long time, because not everything can (or should) be containerized. So wouldn’t it be great if you could share one infrastructure to run Windows and Linux VMs alongside Windows and Linux containers?

Microsoft released Azure Stack HCI and AKS on Azure Stack HCI. Together, these products give you the ability to run containers and VMs on your own datacenter hardware, deployed and managed through the Azure portal and Windows Admin Center.

In this blog we’ll talk a little bit about Kubernetes and how it works, but also about the possibilities we have with Azure, Azure Arc, and Azure Stack HCI as a virtualization and storage platform to run VMs and containers managed by Kubernetes.

 

Virtual Machines

With a virtual machine the hardware is virtualized, and the operating system runs on top of virtual hardware instead of the physical hardware. Inside the OS you can do practically everything you can do on a physical computer. The VM runs on a virtualization host along with multiple other VMs.

On a decent virtualization platform, we want to make sure that VMs are highly available. If a host fails, the VM is quickly moved to another system and booted; in a matter of seconds the VM is back, with access and functionality restored. For this to work we need shared storage. This can be provided in various ways, such as a traditional SAN with Fibre Channel or iSCSI access, or hyperconverged storage like Storage Spaces Direct. In addition, we need a cluster service to make sure that when a node fails, the other nodes detect it and take action. Within Windows, the Failover Clustering feature takes care of this.

 

Containers

When we look at a container there is some overlap. A container is an isolated, lightweight instance for running an application on top of a host operating system. This host can be a physical machine or a virtual machine. Containers are built on top of the host operating system’s kernel and contain only the application plus some lightweight operating system APIs and services that run in user mode. If you have a Windows VM with Docker you can deploy Windows containers; on a Linux VM you can deploy Linux containers. Because a container shares the kernel of its host, you cannot mix Windows and Linux containers on the same underlying OS.
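If you want to see that kernel sharing for yourself, here is a small sketch using the Docker SDK for Python on a Linux host with Docker installed (the image and command are just for illustration): the container reports exactly the same kernel as the host it runs on.

```python
# Minimal sketch: show that a container shares its host's kernel.
# Assumes a Linux host with Docker running and the Docker SDK for Python
# installed (pip install docker). Image and command are illustrative.
import platform

import docker

client = docker.from_env()

# Kernel version as seen by the host OS.
host_kernel = platform.release()

# Kernel version as seen from inside a small Linux container.
container_kernel = client.containers.run(
    "alpine:3.18", "uname -r", remove=True
).decode().strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# Both lines print the same version: the container has no kernel of its
# own, which is why Windows and Linux containers cannot share one host OS.
```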

High Availability

For containers and VMs the same applies: we want the application running inside to be highly available in case something fails. This is where VMs and containers differ. With VMs we have the failover cluster manager to detect failures and take action accordingly. With containers we don’t use the failover cluster manager, because deploying, rebuilding, and so on is handled by another management layer. This is where container orchestrators such as Kubernetes come into play.

 

Kubernetes and Failover Clusters

With VMs and containers the same rule applies: treat them as cattle, not as pets, meaning that you don’t want to depend too much on any individual one.

VMs are bigger and contain persistent data. Destroying one or spinning up a new one takes more time, and you could potentially lose data. That’s why they are stored on shared storage: in case of a failure, the failover cluster manager boots the VM on another host that can also access that shared storage, and it’s up and running again.

Containers are very small and in most cases don’t contain any data, so it is easier and faster to just deploy new ones. Container orchestration platforms like Kubernetes take care of this: they detect when containers are down, spin up new ones on other hosts, and make sure they are accessible.
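As a small illustration of that behaviour, the sketch below uses the official Kubernetes Python client against a cluster reachable via your kubeconfig, and assumes a hypothetical Deployment whose pods carry the label app=web in the default namespace. Delete one pod and the Deployment quietly schedules a fresh replacement.

```python
# Minimal sketch: Kubernetes replaces pods that disappear.
# Assumes a reachable cluster (kubeconfig) and a hypothetical Deployment
# whose pods are labelled app=web in the "default" namespace.
import time

from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig context
core = client.CoreV1Api()

def pod_names():
    pods = core.list_namespaced_pod("default", label_selector="app=web")
    return [p.metadata.name for p in pods.items]

before = pod_names()
print("pods before:", before)

# Kill one pod; we never recreate it ourselves.
core.delete_namespaced_pod(before[0], "default")

time.sleep(10)                     # give the ReplicaSet a moment to react
after = pod_names()
print("pods after: ", after)
# The deleted name is gone, but a freshly named replacement appears,
# because the Deployment keeps the declared number of replicas running.
```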

 

Kubernetes

Kubernetes manages the deployment of resources (not only containers). Kubernetes has several objects and building blocks it uses to deploy, manage, and publish those resources, which we will dive into in another blog. For now, it is important to know that a Kubernetes cluster consists of a control plane with master nodes and additional worker nodes to run the workloads.
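If you already have a cluster at hand, a quick way to see that split is to list the nodes and read their role labels. A minimal sketch with the Kubernetes Python client (older clusters label masters with node-role.kubernetes.io/master, newer ones with node-role.kubernetes.io/control-plane):

```python
# Minimal sketch: show which nodes form the control plane and which are workers.
# Assumes a reachable cluster via the current kubeconfig context.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    labels = node.metadata.labels or {}
    is_control_plane = (
        "node-role.kubernetes.io/control-plane" in labels
        or "node-role.kubernetes.io/master" in labels   # older clusters
    )
    role = "control plane (master)" if is_control_plane else "worker"
    node_os = labels.get("kubernetes.io/os", "?")
    print(f"{node.metadata.name:30} {role:25} {node_os}")
```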

 

Master Nodes

For production use, a Kubernetes cluster should have a minimum of three master nodes. The master nodes manage the deployment of the various components required to run containers and to communicate with them. They also provide the API layer that the workers use to communicate with the masters; the same API is used to deploy workloads. The master nodes can run on physical or virtual machines, but only on a Linux-based OS.
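To show that workloads really do flow through that API, here is a hedged sketch that submits a small Deployment with the Kubernetes Python client. The names and image are purely illustrative, and in day-to-day use you would more likely apply an equivalent YAML manifest with kubectl.

```python
# Minimal sketch: deploy a workload through the Kubernetes API.
# Assumes a reachable cluster; the name, labels and image are illustrative.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                    # desired state: 3 pods
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(name="web", image="nginx:1.25")
                ]
            ),
        ),
    ),
)

# The API server on the master nodes stores this desired state; the control
# plane then schedules the pods onto suitable worker nodes.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```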

 

Worker Nodes

The worker nodes are used to run the container workloads. Worker nodes are also known as minions…

[Image: virtual machine minions]

Let’s hope these minions behave better than the yellow dudes and don’t turn it all into chaos…

The worker nodes can be either Linux or Windows. The Windows option gives us a lot of flexibility with Azure Stack HCI, but before we go down that path, let’s dive a little deeper into the Kubernetes on Windows requirements first.

 

Worker Nodes on Windows

To add Windows workers to a Kubernetes cluster, the Windows worker must run at least Windows Server 2019 or Azure Stack HCI OS, and the cluster must be on Kubernetes 1.17 or above. In addition, the Windows Containers feature and Docker are required. There are other container engines available, but Docker is widely used and has the best support on Windows, so we recommend using Docker. Besides these requirements we also need some additional things, such as networking and storage on the worker nodes, which we will discuss in the next parts of this blog series. Once the requirements are set up, we have a working Windows worker capable of running containers deployed and managed by Kubernetes.
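Once a Windows worker has joined the cluster, you can sanity-check those requirements against what the node itself reports. A minimal sketch with the Kubernetes Python client, assuming a cluster reachable via your kubeconfig:

```python
# Minimal sketch: check what the Windows worker nodes report against the
# requirements mentioned above (Kubernetes 1.17+, Windows Server 2019 /
# Azure Stack HCI OS, Docker). Assumes a reachable cluster via kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    info = node.status.node_info
    if info.operating_system != "windows":
        continue  # only interested in the Windows workers here
    print(f"node:              {node.metadata.name}")
    print(f"  OS image:        {info.os_image}")                  # e.g. Windows Server 2019 Datacenter
    print(f"  kubelet version: {info.kubelet_version}")           # should be v1.17 or above
    print(f"  runtime:         {info.container_runtime_version}") # e.g. docker://19.3.x
```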

 

Windows and Linux Containers 

As described earlier in this blog, you cannot mix different container OSes on the same host. But that is only a hard limit for Linux workers: a Linux worker node cannot run Windows containers, while a Windows worker can run both Windows and Linux containers thanks to WSL (Windows Subsystem for Linux). With a Kubernetes cluster containing Windows worker nodes, or let’s say mixed worker nodes, you can run both Linux and Windows containers, and that is a great opportunity!
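In such a mixed cluster you tell the scheduler which OS a pod needs through the standard kubernetes.io/os node label. A minimal sketch with the Kubernetes Python client (names and image are illustrative; swap the selector to linux and use a Linux image for Linux containers):

```python
# Minimal sketch: pin a pod to a Windows worker in a mixed cluster by using
# the standard kubernetes.io/os node label as a nodeSelector.
# Assumes a reachable cluster; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

windows_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="iis-demo"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/os": "windows"},   # use "linux" for Linux workers
        containers=[
            client.V1Container(
                name="iis",
                # A Windows Server Core based IIS image; Linux pods would use a Linux image.
                image="mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="default", body=windows_pod)
```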

 

Azure Stack HCI & Azure Kubernetes Service (AKS)

Azure Stack HCI is Microsoft’s hyperconverged infrastructure offering and the basis for a software-defined datacenter. HCI brings together highly virtualized compute, storage, and networking on industry-standard x86 servers and components.

With Azure Stack HCI we can create a robust platform to host virtual machines, and those virtual machines in turn are the foundation for a robust container platform. Because Azure Stack HCI makes use of clustering, it is also well suited to host the Kubernetes cluster itself, making sure that the VMs running the Kubernetes nodes are spread across physical machines to reduce downtime.

Microsoft has released Azure Kubernetes Service on Azure Stack HCI to save you the hassle of setting up Kubernetes yourself. Just as with AKS in Microsoft Azure, you get your own Kubernetes clusters deployed and managed for you, but running in your own datacenter. This brings a lot of advantages to the table, such as lower latency and data locality.

[Image: AKS in Azure architecture]

[Image: AKS on Azure Stack HCI architecture]

Getting started with AKS on Azure Stack HCI

Read more about AKS on Azure Stack HCI on the Microsoft Docs page here.

 

To get started and download the preview, you can head over to the preview registration page here.

Microsoft released a great blog post on how Kubernetes is intertwined with Azure Stack HCI and the storage components: https://techcommunity.microsoft.com/t5/azure-stack-blog/. It explains the basics and how to get started using Windows Admin Center.

 

Do you want a consultation on how AKS on HCI matches your challenges? Reach out!