Change to root:
sudo su
Turn off swap: To do this, you will first need to turn it off directly …
swapoff -a
… then comment out the reference to swap in /etc/fstab. Start by editing the file:
nano /etc/fstab
Then comment out the appropriate line, as in:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-W0C3VzBJbs6WPOkDRZyucwwkYeHnydeaHiM0Ralee7dstLYLUp0P1LGH6yd3h7wo / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/f5e207f7-b282-4684-8966-8ddb1e765db2 /boot ext4 defaults 0 0
# /swap.img none swap sw 0 0 <- this line
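If you want to double-check that swap is really off, swapon should report nothing and free should show zero swap:
swapon --show
free -h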
Now configure iptables to receive bridged network traffic. First edit the sysctl.conf file:
nano /etc/ufw/sysctl.conf
And add the following lines to the end:
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
Reboot so the changes take effect. Install ebtables and ethtool:
sudo su
apt-get install ebtables ethtool
Reboot once more.
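After the reboot, you can optionally confirm that the bridge settings from sysctl.conf took effect (each should report 1, assuming the br_netfilter module is loaded):
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.bridge.bridge-nf-call-arptables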
Install Docker:
sudo su
apt-get update
apt-get install -y docker.io
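If you want a quick sanity check of the Docker install, confirm the version and that the service is running:
docker --version
systemctl status docker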
Install HTTPS support components (if necessary):
apt-get update
apt-get install -y apt-transport-https
Install Curl (if necessary):
apt-get install curl
Retrieve the key for the Kubernetes repo and add it to your key manager:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Add the Kubernetes repo to your system:
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
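If you'd like to confirm the repo was added correctly, just print the file back:
cat /etc/apt/sources.list.d/kubernetes.list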
Now install the three pieces you'll need (kubeadm, kubelet, and kubectl):
apt-get update
apt-get install -y kubelet kubeadm kubectl
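A quick way to confirm the tools installed, and to see which versions you got:
kubeadm version
kubectl version --client
kubelet --version
Many guides also pin these packages so a routine apt upgrade doesn't move them out from under a running cluster; if you want that, apt-mark hold kubelet kubeadm kubectl will do it.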
At this point you have all the tools you need, so you're ready to go ahead and deploy a k8s cluster.
Now that the Kubeadm installation is complete, we’ll go ahead and create a new cluster using kubeadm init. Part of this process is choosing a network provider, and there are several choices; we’ll use Calico for this kubeadm init example.
Create the actual cluster. For Calico, we need to add the --pod-network-cidr switch as a command-line argument to kubeadm init, as in:
kubeadm init --pod-network-cidr=192.168.0.0/16
This will crank for a while, eventually giving you output something like this:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
.
.
.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.15:6443 --token ... \
--discovery-token-ca-cert-hash sha256:...
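Note the two preflight warnings near the top of the output; if you see the same, you can optionally enable the Docker service so it starts at boot, exactly as the warning suggests:
systemctl enable docker.service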
Prepare your system for adding workloads, including the network plugin. Open a NEW terminal window (so you're running as your regular user again, not root) and execute the commands kubeadm gave you:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
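As a quick sanity check that kubectl can now talk to the cluster:
kubectl cluster-info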
Install the Tigera Calico operator and custom resource definitions. (from https://docs.projectcalico.org/getting-started/kubernetes/quickstart)
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
Install Calico by creating the necessary custom resource.
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Check to see if the pods are running:
kubectl get pods --all-namespaces
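With the operator-based Calico install above, the Calico pods should land in the calico-system namespace; if you'd rather watch them come up, you can add the -w (watch) flag:
kubectl get pods -n calico-system -w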
The pods will start up over a short period of time. Once they're all running, untaint the master so that it's available for scheduling workloads:
kubectl taint nodes --all node-role.kubernetes.io/master-
Confirm that you now have a node in your cluster with the following command:
kubectl get nodes -o wide
It should return something like the following.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
maquiloca Ready control-plane,master 33m v1.20.2 10.0.2.15 <none> Ubuntu 18.04.5 LTS 5.4.0-42-generic docker://19.3.6
If you ever need to tear down the cluster, start by draining the node:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
Then revert the changes made by kubeadm init:
kubeadm reset
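Note that kubeadm reset doesn't clean up everything; in particular, it leaves the CNI configuration and the kubeconfig you copied to your home directory in place, so you may also want something like:
rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config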