Kubernetes Part 3: Install Kubernetes on a Raspberry Pi 4

 



In my previous post, we prepared the Raspberry Pis with Ubuntu 20.04 LTS and Docker, so now we are going to install Kubernetes. One of the Raspberry Pis (k8s-master) will function as the master node and the others will be configured as worker nodes. The master node requires some additional steps, which are explained in this blog, so let's continue.

Actions for Master and Worker nodes:

- Add Kubernetes repository

Add the GPG key for security (to make sure you connect to the legitimate repository) with the following command:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the repository:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

- Install required Kubernetes packages

sudo apt update
sudo apt install kubeadm kubectl kubelet
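Optionally, you can hold the packages at their installed version so an unattended apt upgrade does not move the cluster components out from under you (this mirrors the hint in the official kubeadm install docs):

```shell
# Pin kubeadm, kubectl and kubelet; cluster upgrades should be done
# deliberately with kubeadm, not by a routine apt upgrade.
sudo apt-mark hold kubeadm kubectl kubelet
```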

Note: If you get errors with the first command, wait a few minutes and try again.




Actions for the Master node only:

The following actions are only required for the Raspberry Pi that will function as the master node:

- Initialize Kubernetes with the following command:

 sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Once this completes, the output will include the join command, but don't join the nodes yet. Copy it somewhere for later. It should look something like:

kubeadm join 192.168.1.204:6443 --token qt57zu.wuvqh64un13trr7x --discovery-token-ca-cert-hash sha256:5ad014cad868fdfe9388d5b33796cf40fc1e8c2b3dccaebff0b066a0532e9823
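To avoid retyping it, you can save the copied join command into a small script on the master and copy it to the workers later. The token and hash below are the example values from above; use your own. If the token has expired by the time you join a worker (the default lifetime is 24 hours), kubeadm token create --print-join-command generates a fresh one.

```shell
# Save the join command (example values from the kubeadm init output above)
# into a script for later use on the worker nodes.
cat > join-command.sh <<'EOF'
#!/bin/sh
kubeadm join 192.168.1.204:6443 --token qt57zu.wuvqh64un13trr7x \
  --discovery-token-ca-cert-hash sha256:5ad014cad868fdfe9388d5b33796cf40fc1e8c2b3dccaebff0b066a0532e9823
EOF
chmod +x join-command.sh
```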

- Set up config directory

 mkdir -p ~/.kube
 sudo cp /etc/kubernetes/admin.conf ~/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Go ahead and run those, but if the kubeadm init output recommends different commands, run those instead.

- Install the flannel network driver

 kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Note: the lack of sudo is intentional; kubectl uses the config file you just copied to ~/.kube/config.

Once all of the pods have come up, run the join command on each worker node. 
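To see when all the system pods have come up, you can watch them on the master (press Ctrl-C to stop):

```shell
# Refresh the pod list every 2 seconds; wait until coredns and the
# flannel pods in kube-system all report Running.
watch -n 2 kubectl get pods --all-namespaces
```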


Actions for all worker nodes:

Log on to each worker node via SSH (PuTTY) and enter the command you copied earlier, prefixed with sudo. For example:

sudo kubeadm join 192.168.1.204:6443 --token qt57zu.wuvqh64un13trr7x --discovery-token-ca-cert-hash sha256:5ad014cad868fdfe9388d5b33796cf40fc1e8c2b3dccaebff0b066a0532e9823

Remark: the token and hash are example values; use the ones from your own kubeadm init output.

- Check the status of the nodes

To see if the nodes have joined successfully, run the following command on the master node a few times until everything is Ready:

 kubectl get nodes

When all nodes show a Ready status, your Kubernetes cluster is up and running.
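The output should look roughly like this (node names, ages and version are illustrative):

```
kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   23m   v1.30.2
k8s-node1    Ready    <none>          5m    v1.30.2
k8s-node2    Ready    <none>          4m    v1.30.2
```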


TIP: If you have only one Raspberry Pi to test Kubernetes and no worker nodes, you need to run the following command. This allows standard pods to run on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

On more recent versions of Kubernetes (roughly 1.24 and later, including the v1.30 packages installed above), the command is:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-


Remark: Only use the commands above if you have no worker nodes installed.


- Create a test pod

You can create a test pod to check that Kubernetes is working properly. To create one, do the following:

nano pod.yml

Copy and paste the content below and save the file (Ctrl-O, Enter, Ctrl-X):

 apiVersion: v1
 kind: Pod
 metadata:
   name: nginx-example
   labels:
     app: nginx
 spec:
   containers:
     - name: nginx
       image: linuxserver/nginx
       ports:
         - containerPort: 80
           name: "nginx-http"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: nginx-example
 spec:
   type: NodePort
   ports:
     - name: http
       port: 80
       nodePort: 30080
       targetPort: nginx-http
   selector:
     app: nginx


Apply the YAML file with the following command:

 kubectl apply -f pod.yml


Check the status with:

kubectl get pods

Check the status with more info:

kubectl get pods -o wide

Check the status of the service:

kubectl get service

You should see the nginx-example service listed as type NodePort, with port 80 mapped to nodePort 30080.
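For reference, the service listing should look roughly like this (cluster IPs and ages are illustrative):

```
kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        30m
nginx-example   NodePort    10.103.161.72   <none>        80:30080/TCP   1m
```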





You can access the nginx example via one of the IP addresses of your Raspberry Pis on port 30080 (for example http://192.168.1.201:30080). You should see the nginx welcome page.
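You can also check from the command line of any machine on the network; the IP below is the example address from above:

```shell
# Request the page served by the NodePort service (example IP; use one of
# your own Pi addresses). -s silences the progress output.
curl -s http://192.168.1.201:30080
```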




 
If the test ran OK, you can delete the test pod with the following command:

 kubectl delete -f pod.yml

In Part 4 of the Kubernetes series, the installation of the MetalLB load balancer will be explained.

Remark:
This blog is based on this wiki with some minor adjustments. If you run into issues, don't hesitate to leave a comment. I would also recommend watching this video.

Comments

  1. Got an error, everything was going so well :)
    kubectl apply -f pod.yml
    error: error parsing pod.yml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context

    1. Anonymous, 9:46 PM

      Thanks for the reply. I have tested it and had the same problem. It seems there was a space somewhere in the YAML file that should not have been there. It should be working now, so you can try again.

  2. I still had to remove the space from the start of each line, but it now works. Thank you!!

    1. Anonymous, 8:25 AM

      I am thinking about placing these scripts on my GitHub account so you can download them. YAML files can be very picky about syntax.

    2. Anonymous, 2:34 PM

      The example files can now be downloaded from GitHub at

      https://github.com/erik-de-bont/blog/tree/main/kubernetes

  3. Every 2.0s: kubectl get pods --all-namespaces                k8s-master: Sat Apr 16 01:09:01 2022

    NAMESPACE     NAME                                 READY   STATUS                  RESTARTS   AGE
    kube-system   coredns-64897985d-2rwqv              0/1     Pending                 0          29m
    kube-system   coredns-64897985d-xldk7              0/1     Pending                 0          29m
    kube-system   etcd-k8s-master                      1/1     Running                 0          29m
    kube-system   kube-apiserver-k8s-master            1/1     Running                 0          29m
    kube-system   kube-controller-manager-k8s-master   1/1     Running                 0          29m
    kube-system   kube-flannel-ds-jg92l                0/1     Init:ImagePullBackOff   0          20m
    kube-system   kube-flannel-ds-jtt6t                0/1     Init:ImagePullBackOff   0          15m
    kube-system   kube-flannel-ds-rj8kd                0/1     Init:ImagePullBackOff   0          15m
    kube-system   kube-proxy-5295r                     1/1     Running                 0          29m
    kube-system   kube-proxy-7g8sp                     1/1     Running                 0          15m
    kube-system   kube-proxy-dsn4l                     1/1     Running                 0          15m
    kube-system   kube-scheduler-k8s-master            1/1     Running                 0          29m


    sudo kubeadm join 192.168.1.222:6443 --token xxxxxx --discovery-token-ca-cert-hash sha256:yyyyyyy
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    W0416 00:52:58.871955 4049 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    1. Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal Scheduled 26m default-scheduler Successfully assigned kube-system/kube-flannel-ds-jg92l to k8s-master
      Warning Failed 25m (x2 over 25m) kubelet Failed to pull image "rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to resolve reference "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-flannelcni-flannel-cni-plugin/manifests/v1.0.1": dial tcp 54.156.13.77:443: connect: no route to host
      Warning Failed 24m kubelet Failed to pull image "rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to resolve reference "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-flannelcni-flannel-cni-plugin/manifests/v1.0.1": dial tcp 52.200.78.26:443: connect: no route to host
      Normal Pulling 23m (x4 over 26m) kubelet Pulling image "rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1"
      Warning Failed 23m (x4 over 25m) kubelet Error: ErrImagePull
      Warning Failed 23m kubelet Failed to pull image "rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to resolve reference "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-flannelcni-flannel-cni-plugin/manifests/v1.0.1": dial tcp 34.237.244.67:443: connect: no route to host
      Warning Failed 22m (x6 over 25m) kubelet Error: ImagePullBackOff
      Normal BackOff 71s (x93 over 25m) kubelet Back-off pulling image "rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1"

    2. Ignore, I misconfigured the proxy.

    3. I thought it would be something like that. Great that it is working now! Good luck with the configuration of your Kubernetes cluster.

