Kubernetes-The Hard Way With Docker & Flannel (Part 2)
Welcome back to part 2 of the “Kubernetes-The Hard Way With Docker & Flannel” series. In the previous post we provisioned the compute resources and generated the certificates and kubeconfig files. In this post, we will install and configure the controller nodes.
6. Bootstrapping the etcd Cluster
etcd is a consistent and highly available key-value store. Kubernetes stores all cluster data in etcd, accessed through the kube-apiserver. In this section we will install and configure etcd on all controller nodes.
*NOTE: The commands below must be run on all controller nodes
*TIP: You can use tmux to run commands on multiple nodes at the same time
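First, download and install the etcd binaries. This is a sketch assuming etcd v3.3.9 (the release used in the original guide) and the certificate file names from part 1; adjust both to match your setup:

```bash
# Download and unpack the etcd release (version is an assumption — adjust as needed).
wget -q https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
tar -xf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/

# Create the etcd directories and copy in the certificates generated in part 1.
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```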
Set the following environment variables; they will be used to generate the etcd systemd unit file.
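A minimal sketch, assuming the controllers are named m1 and m2 and that each node’s hostname resolves to its internal IP; the member addresses shown are placeholders you must replace:

```bash
# etcd member name, derived from this node's hostname (m1 or m2).
ETCD_NAME=$(hostname -s)

# This node's internal IP (assumes the hostname resolves to it — adjust if not).
INTERNAL_IP=$(hostname -i | awk '{print $1}')

# All etcd members with their peer URLs (hypothetical IPs — replace with yours).
ETCD_CLUSTER="m1=https://172.17.0.2:2380,m2=https://172.17.0.3:2380"
```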
Create the etcd systemd unit file
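A sketch of the unit file, using the variables set above and the certificates copied to /etc/etcd:

```bash
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${ETCD_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```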
Start the etcd service
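Reload systemd, then enable and start etcd on each controller:

```bash
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```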
Once etcd has been installed and configured on all controller nodes, verify that the etcd cluster is working properly.
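For example, list the cluster members with etcdctl (v3 API), reusing the same certificates:

```bash
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```

Both controllers (m1 and m2) should be listed as started members.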
Move the kube-scheduler kubeconfig into the Kubernetes directory
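Assuming the kube-scheduler.kubeconfig generated in part 1 is in the current directory:

```bash
sudo mkdir -p /var/lib/kubernetes/
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```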
Create kube-scheduler configuration file
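A sketch of the configuration file; note that the apiVersion of KubeSchedulerConfiguration depends on your Kubernetes release (v1alpha1 is shown here, matching the 1.12-era original guide):

```bash
sudo mkdir -p /etc/kubernetes/config

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
```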
Create kube-scheduler systemd unit file
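Assuming the kube-scheduler binary is installed at /usr/local/bin:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```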
Start the controller services
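Assuming the kube-apiserver and kube-controller-manager unit files are already in place from the earlier steps:

```bash
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```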
Enable HTTP Health Checks
In the original “Kubernetes The Hard Way”, Kelsey used a GCP network load balancer to distribute requests among the controllers. Since it is difficult to set up HTTPS health checks on a GCP network load balancer, and the kube-apiserver supports only HTTPS health checks, he put an nginx HTTP proxy in front of the kube-apiserver so the load balancer could perform its health checks over HTTP. In our case, we can skip this step since we are not using a GCP network load balancer.
Check the component statuses using the command below.
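Assuming the admin.kubeconfig generated in part 1 is in the current directory:

```bash
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```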
Run the above command on all controller nodes and verify the statuses, which should look like the output below.
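With two controllers, the output should resemble:

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
```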
RBAC for Kubelet Authorization
In this section we will configure RBAC permissions to allow the kube-apiserver to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics and logs, and for executing commands in pods.
Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API.
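A sketch of the ClusterRole, applied through the admin kubeconfig; it grants access to the node proxy, stats, log, spec, and metrics subresources:

```bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```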
The kube-apiserver authenticates to the Kubelet as the “kubernetes” user, using the client certificate defined by the --kubelet-client-certificate flag in the kube-apiserver systemd unit file above.
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
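A sketch of the binding, mirroring the ClusterRole above:

```bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```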
The Kubernetes Frontend Load Balancer
As mentioned earlier, we are not going to use a GCP network load balancer; instead, we will run an nginx Docker container on the host (laptop) to load balance the requests.
In this section, we will build an nginx Docker image with the appropriate configuration to distribute requests among the controller nodes (m1 and m2).
Specify the controller IPs and the kube-apiserver port in the nginx configuration, like below.
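A sketch of nginx.conf using the stream (layer 4) module; the upstream addresses are placeholders for the IPs of m1 and m2:

```nginx
events {}

stream {
    upstream kubernetes {
        # Hypothetical controller addresses — replace with the IPs of m1 and m2.
        server 172.17.0.2:6443;
        server 172.17.0.3:6443;
    }

    server {
        # Accept TCP connections on 6443 and pass them through to the kube-apiservers.
        listen 6443;
        proxy_pass kubernetes;
    }
}
```

Because this is a TCP passthrough, TLS still terminates at the kube-apiserver, so client-certificate authentication keeps working.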
Create a Dockerfile to build the nginx load balancer image
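A minimal Dockerfile, assuming the nginx.conf above sits next to it (the official nginx image ships with the stream module enabled):

```dockerfile
FROM nginx:alpine

# Replace the default configuration with the stream load balancer config above.
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 6443
```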
Build and launch the container
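The image and container names here (k8s-lb) are arbitrary; the run command publishes port 6443 on the host:

```bash
docker build -t k8s-lb .
docker run -d --name k8s-lb -p 6443:6443 k8s-lb
```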
curl the HTTPS endpoint of the load balancer (the nginx Docker container), which forwards the request to a controller node, passing the CA certificate.
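Assuming the ca.pem from part 1 is in the current directory and the load balancer is published on the host’s port 6443:

```bash
curl --cacert ca.pem https://127.0.0.1:6443/version
```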
If everything is good, you should see output like below.
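The response is abridged here and the version fields are illustrative; they will reflect the Kubernetes release you installed:

```json
{
  "major": "1",
  "minor": "12",
  "gitVersion": "v1.12.0",
  "gitTreeState": "clean",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```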
In this post, we have successfully provisioned the controller nodes and the load balancer. We will bootstrap the worker nodes in the next post.