Data Science Dev

Single Control-Plane Cluster with Kubernetes

 


Objective: Install Kubernetes on a single machine divided into three VirtualBox nodes, with a single control plane.

References:

  1. https://medium.com/@KevinHoffman/building-a-kubernetes-cluster-in-virtualbox-with-ubuntu-22cd338846dd
  2. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  3. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
  4. https://docs.projectcalico.org/getting-started/kubernetes/quickstart
  5. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

 

Part 1: Setup VirtualBoxes

  1. Install Ubuntu 18.04 LTS on your machine.
  2. Install VirtualBox on the machine.
  3. Create a VirtualBox VM with Ubuntu 18.04 on it, with 2-4 GB of RAM and 2-4 CPU cores.
  4. Go to File->Host Network Manager and create a Host Network called vboxnet0 with IPv4 Address: 192.168.99.1 and IPv4 Network Mask: 255.255.255.0
  5. Set Network Adapter #2 of the VirtualBox VM to Host-only Adapter and set its name to vboxnet0.
  6. Install Docker CE in the VirtualBox VM.
  7. Turn swap off (sudo swapoff -a, and comment out the swap line in /etc/fstab).
  8. Run sudo apt update && sudo apt install -y openssh-server net-tools
  9. Power down the VirtualBox VM.
  10. Clone the VM 3 times (full clones, not linked).  Name one kubemaster, one worker1, and one worker2.
  11. If your resources are limited, you may reduce worker1 and worker2 to 1 CPU core and 2 GB of memory.  Kubemaster must have at least 2 cores.
  12. Start up the three clones (kubemaster, worker1, worker2).  The original VM is left over as a backup in case something goes wrong.
  13. Change /etc/hostname in each of the VMs to its respective name (kubemaster, worker1, worker2).
  14. Run ifconfig -a and note the name of the host-only adapter (e.g. enp0s8).
  15. Add the following to the bottom of /etc/network/interfaces in each VM, where # is 0, 1, or 2 for kubemaster, worker1, and worker2 respectively.  Be sure to substitute your adapter name for enp0s8 if it differs:

auto enp0s8
iface enp0s8 inet static
address 192.168.99.2#
netmask 255.255.255.0
network 192.168.99.0
broadcast 192.168.99.255

 

  16.  Run “sudo ufw disable” in each of the VMs, then reboot each.
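The interface stanzas from step 15 differ only in the last digit of the address. As a sanity check, a small POSIX shell sketch (a hypothetical helper, not part of the setup; it assumes the adapter is named enp0s8) can print the stanza for each node so you can compare it against what you typed:

```shell
#!/bin/sh
# Hypothetical helper: print the /etc/network/interfaces stanza for one node.
# Assumes the host-only adapter is named enp0s8 (check with ifconfig -a).
gen_stanza() {
  # $1 = node name (used only in the comment), $2 = last digit of the address
  cat <<EOF
# ${1}: append to /etc/network/interfaces
auto enp0s8
iface enp0s8 inet static
address 192.168.99.2${2}
netmask 255.255.255.0
network 192.168.99.0
broadcast 192.168.99.255

EOF
}

gen_stanza kubemaster 0
gen_stanza worker1 1
gen_stanza worker2 2
```

This also makes the addressing scheme explicit: kubemaster is 192.168.99.20, worker1 is 192.168.99.21, and worker2 is 192.168.99.22.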

 

Part 2: Install kubeadm, kubelet, and kubectl on each VM.

  1. On each VM, create a file /etc/sysctl.d/k8s.conf containing the following lines:

          net.bridge.bridge-nf-call-ip6tables = 1

          net.bridge.bridge-nf-call-iptables = 1

  2. Run the following commands on each VM:

          sudo modprobe br_netfilter

          sudo sysctl --system

  3. Run the following commands on each VM; the last 4 commands may fail if the legacy alternatives are not present, but that’s OK:

sudo apt-get install -y iptables arptables ebtables

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy

sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

sudo update-alternatives --set arptables /usr/sbin/arptables-legacy

sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy

  4. Run the following commands on each VM:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list

deb https://apt.kubernetes.io/ kubernetes-xenial main

EOF

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
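Before moving on, it is worth confirming on each VM that the three tools installed and are held at their current versions. A quick check (these commands only work inside the VMs, after the install above succeeds):

```
kubeadm version -o short
kubectl version --client
kubelet --version
apt-mark showhold    # should list kubeadm, kubectl, and kubelet
```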

 

Part 3: Install Calico (or some Pod network add-on) and join the worker nodes to set up the cluster.

  1. In the kubemaster VM only, run the following command:

           sudo kubeadm init --apiserver-advertise-address=192.168.99.20 --pod-network-cidr=192.168.0.0/16

  2. Copy the join command printed at the end of the output and save it somewhere.
  3. In the kubemaster VM only, use the following commands:

           mkdir -p $HOME/.kube

           sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

           sudo chown $(id -u):$(id -g) $HOME/.kube/config

           kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

           watch kubectl get pods --all-namespaces

  4. Wait until all pods are in “Running” status, then press Ctrl-C.
  5. In the kubemaster VM only, remove the master taint so that pods can also be scheduled on the control-plane node:

           kubectl taint nodes --all node-role.kubernetes.io/master-

  6. In each of the worker nodes, run the join command copied in step 2.
  7. You can reboot the machines at this point (I did), but it’s not necessary.
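The join command saved in step 2 follows a fixed pattern; if you lose it, a fresh one can be printed on the master with kubeadm token create --print-join-command. Its general shape looks like this (the token and hash below are placeholders, not real values):

```
kubeadm join 192.168.99.20:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```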

 

Part 4: Test the Cluster

  1. In the kubemaster, type:

kubectl get nodes -o wide

                You should see 3 nodes: 1 master (kubemaster) and 2 worker nodes (worker1 and worker2).

  2. Create a YAML file with the following contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

  3. Enter the following commands in kubemaster:

kubectl create -f <filename of yaml in step 2>

kubectl get pods -o wide

  4. You should see all 9 pods in Running status, spread across the nodes.
  5. To delete the deployment:

kubectl delete deploy --all
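Before deleting it, the Deployment can also be used to demonstrate scaling. For example, on kubemaster, shrink it to 3 replicas and watch the extra pods terminate:

```
kubectl scale deployment nginx-deployment --replicas=3
kubectl get pods -o wide
```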
