Single Control-Plane Cluster with Kubernetes
Objective: Install Kubernetes on a single machine divided into three VirtualBox nodes with a single control plane.
References:
- https://medium.com/@KevinHoffman/building-a-kubernetes-cluster-in-virtualbox-with-ubuntu-22cd338846dd
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
- https://docs.projectcalico.org/getting-started/kubernetes/quickstart
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Part 1: Set Up the VirtualBox VMs
- Install Ubuntu 18.04 LTS on your machine.
- Install VirtualBox on the machine
- Create a VirtualBox VM with Ubuntu 18.04 on it, with 2-4 GB of RAM and 2-4 CPU cores.
- Go to File->Host Network Manager and create a Host Network called vboxnet0 with IPv4 Address: 192.168.99.1 and IPv4 Network Mask: 255.255.255.0
- Set Network Adapter #2 of the VirtualBox VM to Host-only Adapter and set name to vboxnet0.
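- (Optional) If you prefer the command line, roughly equivalent VBoxManage commands are sketched below; <vm-name> is a placeholder for whatever you named the VM, and the created interface may not be called vboxnet0 on every host:
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.99.1 --netmask 255.255.255.0
VBoxManage modifyvm <vm-name> --nic2 hostonly --hostonlyadapter2 vboxnet0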
- Install Docker CE in the VirtualBox VM
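- One common way to install Docker CE at the time these instructions were written (check Docker's documentation for the current procedure) was roughly:
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce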
- Turn the swap disk off (sudo swapoff -a and comment out line with swap in /etc/fstab)
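- For example, the following turns swap off immediately and comments out any swap entry in /etc/fstab (the sed pattern is just one way to do it; double-check the file afterwards):
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab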
- sudo apt update && sudo apt install -y openssh-server net-tools
- Power down the VirtualBox VM
- Clone the VirtualBox VM 3 times (full clone, not linked). Name one kubemaster, one worker1, and one worker2.
- Change worker1 and worker2 to 1 CPU core and 2 GB of memory if you wish, depending on your resources. The kubemaster must have at least 2 CPU cores.
- Start up the three cloned VMs (kubemaster, worker1, worker2). The original VM is left over as a backup in case something goes wrong.
- Change /etc/hostname in each of the VMs to its respective name (kubemaster, worker1, worker2).
- Run ifconfig -a and note the name of the host-only adapter (e.g. enp0s8).
- Add the following to the bottom of /etc/network/interfaces in each VM, where # is 0, 1, or 2 for kubemaster, worker1, and worker2 respectively. Be sure to substitute your adapter name for enp0s8 if it is different:
auto enp0s8
iface enp0s8 inet static
address 192.168.99.2#
netmask 255.255.255.0
network 192.168.99.0
broadcast 192.168.99.255
- Do a “sudo ufw disable” in each of the VMs and then reboot each.
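- After the reboot, it is worth checking the host-only network before moving on. For example, from kubemaster (addresses follow the 192.168.99.2# scheme above):
ip addr show enp0s8       # should show 192.168.99.20/24 on kubemaster
ping -c 3 192.168.99.1    # the vboxnet0 host address
ping -c 3 192.168.99.21   # worker1, once it is up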
Part 2: Install kubeadm, kubelet, and kubectl on each VM.
- Create a file /etc/sysctl.d/k8s.conf and put the following lines in it for each VM:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
- Run the following commands on each VM:
sudo modprobe br_netfilter
sudo sysctl --system
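- To confirm the module loaded and the setting took effect, the following should list br_netfilter and report the value 1:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables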
- Perform the following commands on each VM; the last four update-alternatives commands may fail if the iptables alternatives are not registered (as on stock Ubuntu 18.04), and that's OK:
sudo apt-get install -y iptables arptables ebtables
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
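- (Optional) To see whether the iptables alternatives exist and which one is selected, you can run:
update-alternatives --display iptables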
- Perform the following commands on each VM:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
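- To verify the installation, check the versions on each VM:
kubeadm version
kubelet --version
kubectl version --client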
Part 3: Install Calico (or some Pod network add-on) and join the worker nodes to setup the cluster.
- In the kubemaster VM only, use the following command:
sudo kubeadm init --apiserver-advertise-address=192.168.99.20 --pod-network-cidr=192.168.0.0/16
- Copy the command shown to join nodes and save it somewhere.
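- The join command looks roughly like the one below; the token and hash are unique to your cluster, so use the exact command printed by kubeadm init (and run it with sudo on the workers), not this illustration:
kubeadm join 192.168.99.20:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>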
- In the kubemaster VM only, use the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
watch kubectl get pods --all-namespaces
- Wait until all pods are in "Running" status and then press Ctrl-C.
- In the kubemaster VM only, run the following command to remove the NoSchedule taint from the control-plane node so that pods can also be scheduled on kubemaster:
kubectl taint nodes --all node-role.kubernetes.io/master-
- In each of the worker nodes, execute (with sudo) the join command you copied after kubeadm init.
- You can reboot the machines at this point (I did), but it's not necessary.
Part 4: Test the Cluster
- In the kubemaster, type:
kubectl get nodes -o wide
You should see 3 nodes: 1 master (kubemaster) and 2 worker nodes (worker1 and worker2).
- Create a yaml file with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
- Enter the following commands in kubemaster:
kubectl create -f <filename of the yaml file created above>
kubectl get pods -o wide
- You should see all 9 pods in "Running" status, spread across the nodes.
- To delete the deployment:
kubectl delete deploy --all