Data Science Dev

Design, develop, deploy

Single Control-Plane Cluster with Kubernetes



Objective: Install Kubernetes on a Single Machine divided into Three Virtual Box Nodes with Single Control-Plane.




Part 1: Setup VirtualBoxes

  1. Install Ubuntu 18.04 LTS on your machine.
  2. Install VirtualBox on the machine.
  3. Create a VirtualBox VM running Ubuntu 18.04, with 2-4 GB of RAM and 2-4 CPU cores.
  4. Go to File->Host Network Manager and create a Host Network called vboxnet0, setting its IPv4 Address and IPv4 Network Mask (VirtualBox's default host-only network uses 192.168.56.1 with mask 255.255.255.0).
  5. Set Network Adapter #2 of the VirtualBox VM to Host-only Adapter and set name to vboxnet0.
  6. Install Docker CE in the VirtualBox VM
  7. Turn the swap disk off (sudo swapoff -a and comment out line with swap in /etc/fstab)
  8. sudo apt update && sudo apt install -y openssh-server net-tools
  9. Power down the VirtualBox VM
  10. Clone the VM 3 times (full clone, not linked). Name the clones kubemaster, worker1, and worker2.
  11. Change worker1 and worker2 to 1 CPU core and 2 GB of memory if you wish, depending on your resources. Kubemaster must have at least 2 cores.
  12. Start up all three VMs (kubemaster, worker1, worker2). The original VM is left as a backup in case something goes wrong.
  13. Change /etc/hostname in each of the VMs to its respective name (kubemaster, worker1, worker2).
  14. Run ifconfig -a and note the name of the host-only adapter (e.g. enp0s8).
  15. Add the following to the bottom of /etc/network/interfaces in each VM, where # is 0, 1, or 2 for kubemaster, worker1, and worker2 respectively. Also be sure to substitute your adapter name for enp0s8 if it is different:

auto enp0s8
iface enp0s8 inet static
  address <host-only address for this VM, ending in #>
  netmask <host-only network mask>
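As a concrete sketch of the stanza above, the script below fills in the per-node address. The subnet 192.168.56.0/24 (VirtualBox's default host-only network) and the host addresses .100-.102 are assumptions, not values from this guide; use whatever you configured for vboxnet0.

```shell
#!/bin/sh
# Sketch: print the static-IP stanza for a given node number and adapter.
# Assumption: VirtualBox's default host-only subnet 192.168.56.0/24 with
# host addresses .100/.101/.102 for kubemaster/worker1/worker2 -- adjust
# to match your vboxnet0 settings.
emit_stanza() {
    node_num="$1"    # 0 = kubemaster, 1 = worker1, 2 = worker2
    adapter="$2"     # host-only adapter name, e.g. enp0s8
    printf 'auto %s\n' "$adapter"
    printf 'iface %s inet static\n' "$adapter"
    printf '  address 192.168.56.10%s\n' "$node_num"
    printf '  netmask 255.255.255.0\n'
}

# Example: the stanza for kubemaster (append its output to /etc/network/interfaces)
emit_stanza 0 enp0s8
```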


  16. Do a “sudo ufw disable” in each of the VMs and then reboot each.


Part 2: Install kubeadm, kubelet, and kubectl on each VM.

  1. Create a file /etc/sysctl.d/k8s.conf and put the following lines in it for each VM:

          net.bridge.bridge-nf-call-ip6tables = 1

          net.bridge.bridge-nf-call-iptables = 1

  2. Run the following commands on each VM:

          sudo modprobe br_netfilter

          sudo sysctl --system
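The two steps above can be sketched as one script per VM. As a sketch, it writes k8s.conf to the current directory so the contents can be inspected; copying it into /etc/sysctl.d/ and the modprobe/sysctl calls need root, so those lines are shown commented.

```shell
#!/bin/sh
# Sketch: prepare the bridge-netfilter sysctl file for each VM.
# Writes the file to the current directory first; the privileged
# steps are shown as comments.
cat > k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Then, as root on each VM:
#   cp k8s.conf /etc/sysctl.d/k8s.conf
#   modprobe br_netfilter
#   sysctl --system
```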

  3. Perform the following commands on each VM; the last 4 commands may not work, but that’s OK:

sudo apt-get install -y iptables arptables ebtables

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy

sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

sudo update-alternatives --set arptables /usr/sbin/arptables-legacy

sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
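The four update-alternatives calls above follow one pattern, so they can equally be written as a loop. This sketch only prints the commands rather than running them, since update-alternatives needs root and, as noted, some alternatives may not exist on a given install.

```shell
#!/bin/sh
# Sketch: generate the legacy-backend switch commands for the four tools.
# Printing rather than executing, because update-alternatives needs root
# and a missing alternative on a given install is harmless here.
cmds=""
for t in iptables ip6tables arptables ebtables; do
    cmds="${cmds}sudo update-alternatives --set $t /usr/sbin/${t}-legacy
"
done
printf '%s' "$cmds"
```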

  4. Perform the following commands on each VM.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl


Part 3: Install Calico (or another Pod network add-on) and join the worker nodes to set up the cluster.

  1. In the kubemaster VM only, use the following command, substituting the kubemaster’s host-only IP address and the pod network CIDR (Calico’s default is 192.168.0.0/16):

           sudo kubeadm init --apiserver-advertise-address=<kubemaster host-only IP> --pod-network-cidr=<pod network CIDR>

  2. Copy the join command shown in the output and save it somewhere.
  3. In the kubemaster VM only, use the following commands:

           mkdir -p $HOME/.kube

           sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

           sudo chown $(id -u):$(id -g) $HOME/.kube/config

           kubectl apply -f <URL of the Calico manifest, from the Calico installation docs>

           watch kubectl get pods --all-namespaces

  4. Wait until all pods are in “Running” status and then hit Ctrl-C.
  5. In the kubemaster VM only, use the following command to allow pods to be scheduled on the control-plane node:

           kubectl taint nodes --all node-role.kubernetes.io/master-

  6. In each of the worker nodes, execute the join command copied in step 2.
  7. You can reboot the machines at this point (I did), but it’s not necessary.


Part 4: Test the Cluster

  1. In the kubemaster, type:

kubectl get nodes -o wide

                You should see 3 nodes: 1 master (kubemaster) and 2 worker nodes (worker1 and worker2).

  2. Create a YAML file with the following contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

  3. Enter the following commands in kubemaster:

kubectl create -f <filename of yaml in step 2>

kubectl get pods -o wide
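To see at a glance how the replicas spread across nodes, a small awk filter over the -o wide output can tally pods per node. The sample input in this sketch is illustrative, not real cluster output; on the cluster you would pipe the real kubectl command into the function instead.

```shell
#!/bin/sh
# Sketch: count pods per node from 'kubectl get pods -o wide' output.
# On the cluster you would run:
#   kubectl get pods -o wide | count_per_node
count_per_node() {
    # NODE is the 7th column of 'kubectl get pods -o wide'; skip the header.
    awk 'NR > 1 { n[$7]++ } END { for (k in n) print k, n[k] }'
}

# Illustrative sample input (not real output):
printf '%s\n' \
  'NAME  READY  STATUS   RESTARTS  AGE  IP         NODE' \
  'p1    1/1    Running  0         1m   10.1.0.1   worker1' \
  'p2    1/1    Running  0         1m   10.1.0.2   worker2' \
  'p3    1/1    Running  0         1m   10.1.0.3   worker1' | count_per_node
```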

  4. You should see all 9 pods running, spread across the different nodes.
  5. To delete the deployment:

kubectl delete deploy --all
