To install Kubernetes v1.28 properly, we follow this page of the official documentation.
Having prepared our virtual machines in the previous parts, let's now get into the Kubernetes installation.
Connect via SSH to your master 1 node:
ssh username@<master.1.node.ip.address>
First, download the repository signing key so apt can verify the signatures of the installation packages:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
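On some Debian/Ubuntu releases the /etc/apt/keyrings directory does not exist yet, and the command above fails because gpg cannot write its output there. If that happens, create the directory first and re-run the download (a small sketch; the path matches the command above):

```shell
# Create the apt keyring directory if it is missing, with the usual permissions,
# then re-run the key download command above.
sudo mkdir -p -m 755 /etc/apt/keyrings
```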
Now add the official Kubernetes repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Fine, now we are ready to refresh the package indexes, so execute:
sudo apt update
When done, install the Kubernetes components kubectl, kubeadm, and kubelet:
sudo apt install -y kubectl kubeadm kubelet
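The official installation docs also recommend pinning these three packages, so that a routine apt upgrade does not accidentally move your cluster to a different version:

```shell
# Prevent apt from upgrading the Kubernetes components automatically.
sudo apt-mark hold kubectl kubeadm kubelet
```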
This will take some time, as the packages are fairly large. The last thing to do before initialising a cluster is to pull the control-plane container images; this takes approximately 5 minutes:
sudo kubeadm config images pull
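If you are curious which images kubeadm is going to pull (or want to verify them afterwards), you can list them without pulling anything:

```shell
# Print the control-plane images used by the installed kubeadm version.
sudo kubeadm config images list
```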
Well, now we have prepared everything to initialise the cluster:
sudo kubeadm init \
--control-plane-endpoint=<ha.proxy.ip.address>:6443 \
--apiserver-cert-extra-sans=<ha.proxy.ip.address> \
--apiserver-advertise-address=<master.1.node.ip.address> \
--pod-network-cidr=10.10.0.0/16 \
--upload-certs
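kubeadm runs preflight checks before initialising anything. A common failure at this step is enabled swap, which kubeadm refuses by default; if init aborts with a swap error, disable it first. A sketch, assuming swap is configured via /etc/fstab:

```shell
# Disable swap for the current boot...
sudo swapoff -a
# ...and comment out swap entries in /etc/fstab so it stays off after reboot.
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```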
After the cluster is initialised, kubeadm prints a typical message with the join commands. There is no need to execute them now; just read and copy them:
----
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join <ha.proxy.ip.address>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<cert-hash> \
--control-plane --certificate-key <certificate-key>
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <ha.proxy.ip.address>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<cert-hash>
----
Copy this message somewhere, e.g. into a text file of notes, so you can reuse these commands on the next machines. By default the token's time to live is 24 hours, so take that into consideration when you continue installing Kubernetes on the remaining nodes.
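If the token has already expired by the time you reach the other nodes, there is no need to re-initialise anything; kubeadm can mint a fresh token and print a complete worker join command:

```shell
# Generate a new bootstrap token and print the full worker join command.
sudo kubeadm token create --print-join-command
```

For joining additional control-plane nodes after the two-hour window, also re-upload the certificates with "kubeadm init phase upload-certs --upload-certs", as the kubeadm message above notes.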
Ok, now let's create the pod network in our cluster. There are quite a few alternatives for this; you can read about them on the official Kubernetes documentation page. In our case we will deploy the Calico network components. Let's download its YAML manifest on the master 1 node:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
In the downloaded manifest the CALICO_IPV4POOL_CIDR variable is commented out by default; if Calico does not pick up your pod CIDR automatically, uncomment it and set it to 10.10.0.0/16 to match the --pod-network-cidr we passed to kubeadm init. After that, apply the manifest to your cluster:
kubectl apply -f calico.yaml
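Calico takes a minute or two to roll out. You can check on its pods (the calico-node DaemonSet and the calico-kube-controllers Deployment) in the kube-system namespace; repeat until they all show STATUS Running:

```shell
# List the Calico pods; the grep filter on the pod names is just for readability.
kubectl get pods -n kube-system -o wide | grep -i calico
```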
Fine, the cluster is provisioned. It has only one node for now, but we will join the other nodes soon. Go ahead and execute the three commands from the kubeadm message, where it says "To start using your cluster...". When done, you can check your cluster:
kubectl get nodes
This will show the master 1 node in the cluster, and now you are ready to connect the other nodes to it. To do that, repeat the Kubernetes v1.28 installation on each remaining node using the commands from this part of the tutorial, up to (but not including) cluster initialisation. Instead of the init command, run the join command from the kubeadm message, choosing the control-plane variant or the worker variant depending on the node's role.
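Once every node has joined, a quick sanity check from master 1 is to list the cluster members with their details; all of them should eventually report a Ready status:

```shell
# Show all cluster nodes with their roles, versions and internal IPs.
kubectl get nodes -o wide
```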