Create a Kubernetes Cluster using kubeadm on CentOS 7


Requirements

I’m listing the requirements here, but will go through all the steps below. (A quick way to verify a node meets them is sketched after the list.)

  • https://kubernetes.io/docs/setup/independent/install-kubeadm/
  • Operating System: CentOS 7
  • 2 GB or more of RAM per machine (any less will leave little room for your apps)
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster (public or private network is fine)
  • Unique hostname, MAC address, and product_uuid for every node. See the kubeadm installation docs (linked above) for more details.
  • Certain ports are open on your machines. See the kubeadm installation docs (linked above) for the full list.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
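
A quick way to confirm a node meets the CPU, memory, and swap requirements (standard Linux commands, nothing CentOS-specific):

$ nproc    # should report 2 or more
$ free -h  # 2 GB+ of RAM; the Swap line should read 0B once swap is disabled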

My Setup

I created a quick VPC called itsmetommy and three VMs within GCP (a rough gcloud equivalent is sketched after this list).

  • VPC: itsmetommy 10.0.0.0/9
    • Range: 10.0.0.0 – 10.127.255.255
    • IPs: 10.0.0.1 – 10.127.255.254
    • Hosts: 8388606
  • k8s Cluster
    • 10.244.0.0/16
    • Range: 10.244.0.0 – 10.244.255.255
    • IPs: 10.244.0.1 – 10.244.255.254
    • Hosts: 65534
  • OS: CentOS 7
  • Machine Type: n1-standard-2 (2 vCPUs, 7.5 GB memory)
  • Hostnames
    • k8s-itsmetommy-master
    • k8s-itsmetommy-worker-1
    • k8s-itsmetommy-worker-2
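
For reference, the GCP side can be created with gcloud along these lines. The region/zone and the choice to reuse itsmetommy as the subnet name are my own assumptions for illustration:

$ gcloud compute networks create itsmetommy --subnet-mode=custom
$ gcloud compute networks subnets create itsmetommy \
    --network=itsmetommy --range=10.0.0.0/9 --region=us-west1
$ gcloud compute instances create k8s-itsmetommy-master k8s-itsmetommy-worker-1 k8s-itsmetommy-worker-2 \
    --zone=us-west1-a --machine-type=n1-standard-2 \
    --image-family=centos-7 --image-project=centos-cloud \
    --subnet=itsmetommy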

Install

Run on every node.

$ sudo bash
# yum -y update

Verify the MAC address and product_uuid are unique for every node.

# ifconfig -a
# cat /sys/class/dmi/id/product_uuid

Disable swap.

# swapoff -a
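
Note that swapoff -a only disables swap until the next reboot. My GCP CentOS images had no swap configured, but if yours does, commenting the swap entry out of /etc/fstab keeps it off permanently:

# sed -i '/ swap / s/^/#/' /etc/fstab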

Disable the firewall.

# systemctl disable firewalld && systemctl stop firewalld

Since firewalld is disabled, I relied on GCP firewall rules instead: one allowing port 22 from the outside, and an internal rule allowing all of the recommended ports between the nodes.
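
If you prefer to script those rules, something like this works — the rule names are my own, and the port list follows the kubeadm requirements (6443 for the API server, 2379-2380 for etcd, 10250-10252 for the kubelet/scheduler/controller-manager, 30000-32767 for NodePort services, plus UDP 8472 for Flannel VXLAN):

$ gcloud compute firewall-rules create itsmetommy-allow-ssh \
    --network=itsmetommy --allow=tcp:22
$ gcloud compute firewall-rules create itsmetommy-allow-internal \
    --network=itsmetommy --source-ranges=10.0.0.0/9 \
    --allow=tcp:6443,tcp:2379-2380,tcp:10250-10252,tcp:30000-32767,udp:8472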

Configure iptables to receive bridged network traffic.

# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the new sysctl settings.

# sysctl --system
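
If sysctl --system complains that the net.bridge.* keys don't exist, the br_netfilter kernel module isn't loaded yet. Loading it, and making that persistent, fixes it (the .conf filename below is just my choice):

# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf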

Set SELinux to permissive mode. This is required so that containers can access the host filesystem, which pod networks need.

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
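
A quick sanity check — getenforce should now report Permissive, and the config file should show the change:

# getenforce
# grep ^SELINUX= /etc/selinux/config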

Install Docker.

# yum -y install docker
# systemctl enable docker && systemctl start docker
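
kubeadm's preflight checks warn if the Docker and kubelet cgroup drivers disagree, so it's worth confirming what Docker is using before moving on (the CentOS docker package typically defaults to systemd):

# docker info | grep -i 'cgroup driver'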

Add Kubernetes YUM repository.

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm and kubectl.

# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet && systemctl start kubelet
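
Don't worry if the kubelet immediately goes into a crash loop here — it has no configuration yet and will settle down once kubeadm init (or kubeadm join) runs. You can still confirm the installed versions:

# kubeadm version
# kubectl version --client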

Create cluster

Run on master. The --pod-network-cidr 10.244.0.0/16 flag matches the Flannel pod network deployed later.

$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

Example

$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-itsmetommy-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-itsmetommy-master localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-itsmetommy-master localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502512 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-itsmetommy-master" as an annotation
[mark-control-plane] Marking the node k8s-itsmetommy-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-itsmetommy-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i6rmg7.9b4i2eyl06ru6mqp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.0.2:6443 --token i6rmg7.9b4i2eyl06ru6mqp --discovery-token-ca-cert-hash sha256:a3ddcd2aaa87e7ea9de096dcb24e028e68f3e309ca634df0f278c059efd88527

IMPORTANT: Make a record of the kubeadm join command that kubeadm init outputs. You need this command to join nodes to your cluster.
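
If you lose the join command, or the token expires (the default TTL is 24 hours), you can generate a fresh one on the master at any time:

$ sudo kubeadm token create --print-join-command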

Run on master as a regular user.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
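
Alternatively, if you're staying in a root shell, you can point kubectl at the admin kubeconfig directly instead of copying it:

# export KUBECONFIG=/etc/kubernetes/admin.conf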

Deploying Container Networking Interface (CNI)

The Container Network Interface (CNI) defines how the different nodes and their workloads communicate. There are multiple network providers available; several are listed on the Kubernetes cluster networking addons page (the URL appears in the kubeadm init output above).

I went with Flannel.

Run on master — apply the Flannel manifest.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Run on master — verify all pods are in running state.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-5r97f                        1/1     Running   0          6m58s
kube-system   coredns-86c58d9df4-lxrzr                        1/1     Running   0          6m58s
kube-system   etcd-k8s-itsmetommy-master                      1/1     Running   0          11m
kube-system   kube-apiserver-k8s-itsmetommy-master            1/1     Running   0          11m
kube-system   kube-controller-manager-k8s-itsmetommy-master   1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-hpbrt                     1/1     Running   0          24s
kube-system   kube-proxy-8hdh5                                1/1     Running   0          12m
kube-system   kube-scheduler-k8s-itsmetommy-master            1/1     Running   0          11m

Configure worker nodes

Run on both worker-1 and worker-2.

$ sudo kubeadm join 10.0.0.2:6443 --token i6rmg7.9b4i2eyl06ru6mqp --discovery-token-ca-cert-hash sha256:a3ddcd2aaa87e7ea9de096dcb24e028e68f3e309ca634df0f278c059efd88527

Example

$ sudo kubeadm join 10.0.0.2:6443 --token i6rmg7.9b4i2eyl06ru6mqp --discovery-token-ca-cert-hash sha256:a3ddcd2aaa87e7ea9de096dcb24e028e68f3e309ca634df0f278c059efd88527
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.0.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.2:6443"
[discovery] Requesting info from "https://10.0.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.2:6443"
[discovery] Successfully established connection with API Server "10.0.0.2:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-itsmetommy-worker-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Run on master — get nodes.

$ kubectl get nodes
NAME                      STATUS     ROLES    AGE   VERSION
k8s-itsmetommy-master     Ready      master   15m   v1.13.1
k8s-itsmetommy-worker-1   Ready      <none>   29s   v1.13.1
k8s-itsmetommy-worker-2   NotReady   <none>   5s    v1.13.1

Run on master — view tokens.

$ sudo kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
i6rmg7.9b4i2eyl06ru6mqp   22h   2018-12-29T20:27:36Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Run on master — label the worker nodes (optional).

$ kubectl label node k8s-itsmetommy-worker-1 node-role.kubernetes.io/node=
node/k8s-itsmetommy-worker-1 labeled
$ kubectl label node k8s-itsmetommy-worker-2 node-role.kubernetes.io/node=
node/k8s-itsmetommy-worker-2 labeled

Run on master — get nodes.

Notice the ROLES column and how the workers are now labeled node.

$ kubectl get nodes
NAME                      STATUS   ROLES    AGE    VERSION
k8s-itsmetommy-master     Ready    master   173m   v1.13.1
k8s-itsmetommy-worker-1   Ready    node     158m   v1.13.1
k8s-itsmetommy-worker-2   Ready    node     157m   v1.13.1

Remove cluster

Run on every node to tear the cluster down.

$ sudo kubeadm reset
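
kubeadm reset does not flush iptables rules or remove the CNI configuration it leaves behind. If you want a node fully clean, something along these lines finishes the job (run as root; also remove ~/.kube/config on the master if you copied it there):

# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# rm -rf /etc/cni/net.d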