This is just darling:
```
NAMESPACE     NAME                                   READY   STATUS             RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
kube-system   kube-router-7v944                      0/1     CrashLoopBackOff   10         11h   192.168.1.172   kube02.svealiden.se   <none>           <none>
default       grafana-67d6bc9f96-lp2fk               0/1     Running            3          11h   10.32.1.90      kube03.svealiden.se   <none>           <none>
default       pdnsadmin-deployment-b65c568dd-kd7x4   0/1     Running            8          31d   10.32.0.92      kube02.svealiden.se   <none>           <none>
kube-system   kube-router-nrz6v                      0/1     CrashLoopBackOff   10         11h   192.168.1.173   kube03.svealiden.se   <none>           <none>
kube-system   kube-router-9mmfc                      0/1     CrashLoopBackOff   10         11h   192.168.1.171   kube01.svealiden.se   <none>           <none>
default       zbxserver-b58857598-njf26              0/1     Running            5          23d   10.32.0.90      kube02.svealiden.se   <none>           <none>
default       pdnsadmin-deployment-b65c568dd-rdtft   0/1     Running            11         11h   10.32.2.113     kube01.svealiden.se   <none>           <none>
default       pdnsadmin-deployment-b65c568dd-s2w4n   0/1     Running            5          11d   10.32.1.93      kube03.svealiden.se   <none>           <none>
default       grafana-67d6bc9f96-ws7dw               0/1     Running            6          27d   10.32.0.89      kube02.svealiden.se   <none>           <none>
```
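For triage, the crash-looping pods can be picked out of output like the above with a little awk (STATUS is the fourth column in this layout). The sample lines below are hardcoded so the sketch runs without a cluster; normally you would pipe kubectl straight into awk:

```shell
# Sketch: filter CrashLoopBackOff pods from `kubectl get pods -A -o wide`-style
# output. Sample lines are hardcoded here instead of calling kubectl.
sample='kube-system kube-router-7v944 0/1 CrashLoopBackOff 10 11h
default grafana-67d6bc9f96-lp2fk 0/1 Running 3 11h
kube-system kube-router-nrz6v 0/1 CrashLoopBackOff 10 11h'

# Print the pod name (column 2) wherever the status (column 4) is CrashLoopBackOff
printf '%s\n' "$sample" | awk '$4 == "CrashLoopBackOff" {print $2}'
```

With a live cluster the first line would instead be `kubectl get pods -A -o wide | awk ...`.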
Kube-router is the connection fabric between pods, so all instances being down is suboptimal. It turns out the file kube-router needs in order to connect to Kubernetes couldn’t be found:
```
[root@kube01 ~]# mkctl logs -f kube-router-lrtxp -n kube-system
I1126 07:44:26.337591 1 version.go:21] Running /usr/local/bin/kube-router version v1.3.2, built on 2021-11-03T18:24:15+0000, go1.16.7
Failed to parse kube-router config: Failed to build configuration from CLI: stat /var/lib/kube-router/client.config: no such file or directory
```
This was a surprise to me, since I hadn’t changed any config. I know, because I was asleep! None of this is critical stuff, so it’s no biggie, but I got kind of curious: was this a microk8s thing or a Kubernetes thing? I suspect it’s a microk8s thing, with the path mounted at /var/lib/kube-router/ referencing a specific snap version of microk8s. Not that I upgraded it while asleep, admittedly, but that seems more likely than Kubernetes fiddling with a deployment configuration at random.
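If the snap-path theory is right, the quickest check on the node is whether the host side of the mount actually still contains the file the pod is stat-ing. A minimal sketch, assuming the DaemonSet mounts some hostPath directory into /var/lib/kube-router (the helper name is mine, and the example path is illustrative, not a verified microk8s location):

```shell
# Hypothetical helper: given the host-side directory that the DaemonSet mounts
# into /var/lib/kube-router, report whether client.config is actually there.
check_client_config() {
  dir="$1"
  if [ -e "$dir/client.config" ]; then
    echo "ok: $dir/client.config exists"
  else
    echo "missing: $dir/client.config"
  fi
}

# Usage: point it at whatever hostPath the DaemonSet spec declares
# (illustrative path, not necessarily where microk8s keeps it):
check_client_config /var/lib/kube-router
```

If the DaemonSet’s hostPath bakes in a snap revision directory, a snap refresh could plausibly leave it dangling, which would match the stat error above.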
Anyway… Think I’m going to get myself acquainted with Nomad and Consul for a while…
Addendum: Kubernetes is back up and running by the way. I just had to run mkctl edit ds kube-router -n kube-system a couple of times and fiddle some values back and forth.
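For the record, forcing the DaemonSet pods to be recreated shouldn’t strictly require editing values back and forth: on kubectl 1.15+ a rollout restart does the same trick. A command fragment (not runnable here without a live cluster; mkctl stands in for the kubectl alias used above):

```
# Bounce every kube-router pod without hand-editing the DaemonSet
mkctl rollout restart ds/kube-router -n kube-system
# Watch until all pods are back
mkctl rollout status ds/kube-router -n kube-system
```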