Cloud Native development with Colima
27 October, 2024
Inspired by this blog post and the talk that the author gave at KCD UK 2024, I thought I’d write up how I do “Cloud Native” software development on macOS.
The aforementioned post recommends using kind via Colima. There are very good reasons for doing this, such as that it can give you a multi-node environment in which to test things like Pod Disruption Budgets and affinity / anti-affinity rules. If that’s important to you, or if you’re using Windows or Linux, then you should use kind.
However, I want to write about the way in which I use Colima’s built-in mechanism for deploying Kubernetes (via K3s) and the benefits it brings.
Getting Started
As if it’s not obvious, you need to install Colima. Colima is an acronym of sorts for Containers on Linux for macOS. It’s a wrapper that leverages another project - Lima - to create the Linux virtual machines, install a container runtime, and configure a few things so that Docker works transparently from your perspective, i.e. typing docker in the Terminal on your Mac is, behind the scenes, talking to the runtime provisioned and managed by Colima in a virtual machine.
% colima start --cpu 4 --memory 8
INFO[0000] starting colima
INFO[0000] runtime: docker
INFO[0001] creating and starting ... context=vm
INFO[0013] provisioning ... context=docker
INFO[0014] starting ... context=docker
INFO[0015] done
% docker version
Client:
Version: 27.3.1
API version: 1.46 (downgraded from 1.47)
Go version: go1.22.7
Git commit: ce12230
Built: Fri Sep 20 11:38:18 2024
OS/Arch: darwin/arm64
Context: colima
Server: Docker Engine - Community
Engine:
Version: 27.1.1
API version: 1.46 (minimum version 1.24)
Go version: go1.21.12
Git commit: cc13f95
Built: Tue Jul 23 20:00:07 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.19
GitCommit: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
runc:
Version: 1.1.13
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
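That the docker client on macOS is talking to the VM is down to a Docker context which Colima registers and switches to for us (note the Context: colima line above). You can see it with docker context ls - the colima endpoint below is the default Colima socket path; trimmed output shown for illustration:
% docker context ls
NAME       DESCRIPTION                               DOCKER ENDPOINT
colima *   colima                                    unix:///Users/nick/.colima/default/docker.sock
default    Current DOCKER_HOST based configuration   unix:///var/run/docker.sock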
% colima delete
are you sure you want to delete colima and all settings? [y/N] y
this will delete ALL container data. Are you sure you want to continue? [y/N] y
INFO[0002] deleting colima
INFO[0002] deleting ... context=docker
INFO[0004] done
~ took 4s
Pretty sweet.
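One thing worth knowing: delete is the nuclear option, as the prompt makes clear. If you just want to reclaim CPU and memory between sessions, colima stop shuts the VM down and a later colima start brings everything back with your images and data intact:
% colima stop
% colima start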
Running Kubernetes
Colima has an option (--kubernetes) to automatically install K3s for us, giving us a single-node Kubernetes cluster to work against. It also has another option which is sometimes overlooked - --network-address - which will “assign (a) reachable IP address to the VM”. If we go ahead and supply them both, we get this:
% colima start --cpu 4 --memory 8 --network-address --kubernetes
INFO[0000] starting colima
INFO[0000] runtime: docker+k3s
INFO[0000] preparing network ... context=vm
INFO[0001] creating and starting ... context=vm
INFO[0012] provisioning ... context=docker
INFO[0013] starting ... context=docker
INFO[0014] provisioning ... context=kubernetes
INFO[0014] downloading and installing ... context=kubernetes
INFO[0020] loading oci images ... context=kubernetes
INFO[0026] starting ... context=kubernetes
INFO[0030] updating config ... context=kubernetes
INFO[0031] Switched to context "colima". context=kubernetes
INFO[0032] done
~ took 32s
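Notice the “Switched to context” line - Colima has also pointed kubectl at the new cluster for us, which we can confirm:
% kubectl config current-context
colima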
% kubectl cluster-info
Kubernetes control plane is running at https://192.168.106.28:60923
CoreDNS is running at https://192.168.106.28:60923/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.106.28:60923/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
Note the 192.168.106.28 IP address - this is directly accessible via a bridge which Colima handily set up for us. We can ping it, we can SSH to it - whatever’s exposed is directly accessible as if it were a real server on the same network as our Mac. This is super convenient, as it means we don’t have to worry about port forwarding on a per-service basis or anything like that.
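For example, we can ping it (the IP will of course be different on your machine; output trimmed), and colima ssh drops us straight into a shell inside the VM:
% ping -c 1 192.168.106.28
64 bytes from 192.168.106.28: icmp_seq=0 ttl=64 time=0.566 ms
% colima ssh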
Accessing Services
If we poke around a bit at the cluster (k here is just an alias for kubectl), we can see this:
% k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
colima Ready control-plane,master 3m53s v1.30.2+k3s1 192.168.106.28 <none> Ubuntu 24.04 LTS 6.8.0-39-generic docker://27.1.1
So the node’s internal IP is the one we’d expect, i.e. 192.168.106.28 - the one that’s directly accessible from our client (macOS) as if we were on the same network.
% k get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 4m34s
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m30s
kube-system metrics-server ClusterIP 10.43.212.130 <none> 443/TCP 4m29s
Usually the trick now is “how do I access services in my cluster?”. Well, we can use a NodePort, since we can just hit that IP address directly without having to do any additional port-mapping configuration. However, thanks to the magic of Klipper - K3s’s built-in controller which ‘cheats’ - we can also create Services of type LoadBalancer:
% helm install nginx ingress-nginx/ingress-nginx -n nginx-system --create-namespace
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/nick/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /Users/nick/.kube/config
NAME: nginx
LAST DEPLOYED: Sun Oct 27 21:12:21 2024
NAMESPACE: nginx-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
% k get svc -n nginx-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-nginx-controller LoadBalancer 10.43.158.84 192.168.106.28 80:31731/TCP,443:32346/TCP 23s
nginx-ingress-nginx-controller-admission ClusterIP 10.43.214.173 <none> 443/TCP 23s
With this command we’ve installed the NGINX Ingress Controller, which by default creates a LoadBalancer Service for us, and immediately that Service has been assigned an External IP - that of our node. No need to install kube-vip or MetalLB or anything like that to advertise an additional IP address allocated from a range. If you’re curious as to how this works, read the official K3s documentation on its ServiceLB (Klipper).
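You can actually see Klipper doing its thing: for every LoadBalancer Service, K3s creates svclb- Pods (via a DaemonSet) which claim the Service’s ports on the node. Something along these lines - names, namespace and suffixes will vary with your K3s version:
% k get pods -A | grep svclb
kube-system   svclb-nginx-ingress-nginx-controller-f9b4e2c8-x7k2p   2/2     Running   0          23s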
And of course it just works:
% http 192.168.106.28
HTTP/1.1 404 Not Found
Connection: keep-alive
Content-Length: 146
Content-Type: text/html
Date: Sun, 27 Oct 2024 21:43:22 GMT
A 404 is what we expect, and shows that the service is reachable.
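From here it’s business as usual: point an Ingress at whatever you’re running and the controller routes it. A minimal sketch - both the my-app Service and the my-app.test hostname are made up for illustration:
% kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
EOF
% http 192.168.106.28 Host:my-app.test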
Building and Deploying
Now for the real productivity win, and the single greatest reason to use Colima over kind (or Minikube, for that matter): K3s and docker on your Mac are using the same container runtime. This means that when you do a docker build locally, that image is immediately available in your cluster for use by Pods. No need to push images to a registry, no need to do the kind load image stuff - just deploy or restart your Pods and they’ll pick up whatever you’ve built. Don’t believe me? Run docker ps:
% docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
822f30a8c11d ghcr.io/dexidp/dex "/shared/argocd-dex …" 5 minutes ago Up 5 minutes k8s_dex-server_argocd-dex-server-7bb9bd65b-vcmbd_argocd_a9c2a790-081e-48b5-befa-5a746c31333d_0
127ddf19b4b3 public.ecr.aws/docker/library/redis "docker-entrypoint.s…" 5 minutes ago Up 5 minutes k8s_redis_argocd-redis-6d448d7776-nx4vm_argocd_0c162f16-af5d-4d65-8600-7ff9d02e0287_0
b132a5822c88 5f44ca1866c7 "/usr/bin/tini -- /u…" 5 minutes ago Up 5 minutes k8s_repo-server_argocd-repo-server-7599cbc96c-gvsdz_argocd_e2c25a1b-6ff6-4700-8de0-d307b7205151_0
da2bde2abb0a 5f44ca1866c7 "/usr/bin/tini -- /u…" 5 minutes ago Up 5 minutes k8s_application-controller_argocd-application-controller-0_argocd_0b5dad13-85ce-4300-baeb-befcea753a44_0
ce6d0dd3c76f 5f44ca1866c7 "/usr/bin/tini -- /u…" 5 minutes ago Up 5 minutes k8s_server_argocd-server-fc4d58c47-qb826_argocd_f6842f6d-7d82-4698-8a0a-f97e97243db8_0
ee8ed37f6652 5f44ca1866c7 "/usr/bin/tini -- /u…" 5 minutes ago Up 5 minutes k8s_applicationset-controller_argocd-applicationset-controller-578d744fc9-bw4t8_argocd_429671ab-68ae-4206-9dcd-3c97a9364e5e_0
[..]
The output of docker ps shows the Kubernetes containers that are running. And docker images will show the stuff that was pulled down as part of K3s being installed:
% docker images
[..]
rancher/mirrored-metrics-server v0.7.0 5cd7991a1c72 9 months ago 65.1MB
rancher/mirrored-library-busybox 1.36.1 3fba0c87fcc8 17 months ago 4.04MB
rancher/mirrored-coredns-coredns 1.10.1 97e04611ad43 20 months ago 51.4MB
rancher/mirrored-pause 3.6 7d46a07936af 3 years ago 484kB
[..]
Here’s a real world example. Let’s say I’m making changes to Unikorn’s identity service. I’ve deployed it to my cluster already, and I’ve specified that I want to deploy images with the tag 0.0.0. The Makefile has an images target, and unless overridden the VERSION is also 0.0.0, meaning if I run make images it’ll build container images with my changes and tag them with 0.0.0, effectively matching what’s being deployed in my cluster. It’s analogous to just specifying latest, basically.
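One caveat worth knowing about: this only works if the kubelet doesn’t insist on pulling the image from a registry. Kubernetes defaults imagePullPolicy to Always for the latest tag (and for untagged images) and to IfNotPresent for everything else, so a fixed tag like 0.0.0 is picked up from the shared runtime automatically; if you do deploy latest, set the policy explicitly. You can check what your Deployments are using with something like:
% k get deploy -n unikorn-identity -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}{end}'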
Anyway, I’ve got my identity-related Pods running:
% k get pods -n unikorn-identity
NAME READY STATUS RESTARTS AGE
unikorn-identity-58b99bd677-tklf2 1/1 Running 0 2m58s
unikorn-organization-controller-7ddf87bd96-z8qmd 1/1 Running 0 2m58s
unikorn-project-controller-5d9948fc96-4lt77 1/1 Running 0 2m58s
Now I can hack away, and as soon as I’m ready to build and test my changes in my cluster, I can do make images and then restart the relevant Pods:
% gmake images
if [ -n "" ]; then docker buildx create --name unikorn --use; fi
for image in unikorn-identity unikorn-organization-controller unikorn-project-controller; do docker buildx build --platform linux/arm64 --load -f docker/${image}/Dockerfile -t ghcr.io/unikorn-cloud/${image}:0.0.0 .; done;
[..]
% k rollout restart deploy unikorn-{identity,{organization,project}-controller} -n unikorn-identity
deployment.apps/unikorn-identity restarted
deployment.apps/unikorn-organization-controller restarted
deployment.apps/unikorn-project-controller restarted
And that’s it, as soon as the new versions of my Pods are up and running my changes are live.
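If you’d rather not just eyeball k get pods, kubectl rollout status blocks until the new ReplicaSet is ready:
% k rollout status deploy/unikorn-identity -n unikorn-identity
deployment "unikorn-identity" successfully rolled out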
Improvements
I’m aware of Tilt, which could be used to streamline things even further by triggering Deployment restarts on a build event. I should also mention that the Makefile for the various Unikorn services also has kind-specific targets, since the primary author’s desktop is Linux and Colima is a little more limited on that OS. If you’ve any other suggestions, let me know.