This was an attempt to get K8s running on the Linux Mint servers again. But I gave up in the end because Flannel would not work due to a DNS loopback bug. BUT, the Raspberry Pi 4 launched too, so I can turn the Mint servers off and experiment with the Pi clusters again. Installing onto the Pis was a doddle.
Anyway, notes made from the failed attempt - because DNS stayed in CrashLoopBackOff and I gave up.
Bare metal Ubuntu Kubernetes May 2019
I left the K8s alone for some months and the install nuked itself. So, it has been at least a year since I first installed it, let's start again.
Tear down Kubernetes
The correct details are here:
This works too:
sudo kubeadm reset
sudo apt remove --purge kube*
Tear down Docker
Install docker again
I’m doing this on two linux servers.
cd ~/dev/getdocker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker jonathan
docker run hello-world
Install the master and the slave
So, in summary, you need to follow the guide!
Key pieces: kubeadm, kubectl, Flannel, and then setting it up to work with Docker and sorting out the silly swap issue.
You need to configure it for Docker, don’t forget. https://kubernetes.io/docs/setup/cri/
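The gist of that page (at the time, anyway - check it for the current version) is switching Docker to the systemd cgroup driver so it matches the kubelet. Roughly this in /etc/docker/daemon.json:

```shell
# Sketch of the Docker config the CRI setup page suggests; the exact
# options may have moved on, so treat this as an approximation.
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```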
If you want to use kubectl get nodes etc. from both master and slave, copy ~/.kube/config from the master box to the user on the slave; then you can run the commands on the slave as well.
Dealing with errors
“It seems like the kubelet isn’t running or healthy.”
The tools you use are:
systemctl status kubelet
journalctl -xeu kubelet
For instance, the journalctl output for the above error shows that it is the swap issue, in which case you can:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
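If you're nervous about running that sed against the real /etc/fstab, you can dry-run the pattern on a scratch copy first (the UUIDs here are made up):

```shell
# Fake two-line fstab: one root mount, one swap entry
printf 'UUID=abcd / ext4 defaults 0 1\nUUID=efgh none swap sw 0 0\n' > /tmp/fstab-demo

# Same sed as above: comment out any line containing " swap "
sed -i '/ swap / s/^/#/' /tmp/fstab-demo

cat /tmp/fstab-demo
```

Only the swap line gets a # prepended; the root mount is untouched.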
Or add the line below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
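The line in question is, I think, the kubelet flag that tolerates swap being on - something like this in the drop-in:

```ini
# Assumed: tells the kubelet not to bail out when swap is enabled
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
```

Followed by a systemctl daemon-reload and a kubelet restart to pick it up.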
Stupid DNS pods in CrashLoopBackOff
Seems that dnsmasq is a local DNS cache, and the file /etc/resolv.conf shows nameserver 127.0.1.1. This somehow trips the Kubernetes DNS loopback detection.
In the end, tried this:
sudo nano /etc/NetworkManager/NetworkManager.conf  # comment out dns=dnsmasq
sudo systemctl restart network-manager
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl get pods --all-namespaces
Ho hum, still broken, so reverted. Maybe try this:
Add the following to your kubelet config yaml: resolvConf:
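Presumably pointing it at the real resolv.conf that systemd-resolved maintains, rather than the 127.x stub - something like this in the kubelet config (path assumed to be /var/lib/kubelet/config.yaml, check where yours actually lives):

```yaml
# Assumed fix: make the kubelet hand pods an upstream resolv.conf
# instead of the local stub resolver
resolvConf: /run/systemd/resolve/resolv.conf
```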
No luck, dang.
Other commands
sudo systemctl status dnsmasq
kubectl get pods --all-namespaces
kubectl logs coredns-zzzzzzz -n kube-system
kubectl describe pod coredns-zzzzzzz -n kube-system
This is pretty interesting
ps auxww | grep kubelet
Also, perhaps: https://www.bountysource.com/issues/50010096-dnsmasq-pod-crashloopbackoff
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
Then edit /etc/resolv.conf to have
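I assume the idea is a real upstream nameserver instead of the 127.x loopback, e.g.:

```
# Assumed: any reachable non-loopback resolver would do here
nameserver 8.8.8.8
```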
Other useful commands
Start again with kubelet and kubeadm
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
Restarting for whatever reason
systemctl restart docker
systemctl restart kubelet
Need a new Kubernetes master join token
kubeadm token list
kubeadm token create --print-join-command
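You then run the printed command on the slave. It looks roughly like this (the IP, token and hash here are all made up):

```shell
sudo kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234abcd...
```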