My previous post on the Kubernetes installation ended with a NodePort configuration as the final step to access a pod (container) from outside the Kubernetes network.
In Kubernetes there are several different port configurations for services (an example service follows the list below):
- Port exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with the service on this port.
- TargetPort is the port to which the service sends requests and on which your pod is listening. The application in the container needs to listen on this port as well.
- NodePort exposes a service externally to the cluster by means of the target node's IP address and the NodePort. NodePort is the default setting if the port field is not specified.
Source: https://www.bmc.com/
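To see how these three settings relate, here is a minimal NodePort service as a sketch; the name example-app and the port numbers are placeholders and are not used elsewhere in this post:
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: NodePort
  selector:
    app: example-app
  ports:
  - port: 80          # port other pods in the cluster use to reach the service
    targetPort: 80    # port the application inside the container listens on
    nodePort: 30080   # port exposed on each node's IP (default range 30000-32767)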
Kubernetes installations at cloud providers like AWS, Azure and Linode all include a load balancer in their Kubernetes service. Users can access the website (hosted by multiple pods) from one single IP, which is of course load balanced. Your Raspberry Pi Kubernetes cluster doesn't have that option.
There is however a great solution to get load-balancing functionality on your newly built Raspberry Pi Kubernetes cluster, and that is the bare-metal load balancer MetalLB.
The link to their website is https://metallb.universe.tf .
MetalLB supports the Flannel network add-on that we installed during the procedure of installing Kubernetes on a Raspberry Pi cluster.
What you see in the post below are the excerpts from the installation procedure that I used to configure the MetalLB load balancer on my Pi Kubernetes cluster.
I have used the installation-by-manifest option. To start, type the following commands on the master node via an SSH session to the K8s-master:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
The components in the manifest are:
- The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
- The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
- Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
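After applying the manifests and creating the secret, you can check that these components are running before you continue (the pod names on your cluster will differ):
kubectl get pods -n metallb-system
# expect one controller pod and one speaker pod per node, all in Running state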
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.65-192.168.1.127 # < example IP range you reserve for MetalLB
nano metallb-config.yml
- copy & paste the text for the yaml file with your own address values
- press Ctrl-V to paste
- press Ctrl-O to save
- press Ctrl-X to exit
kubectl create -f metallb-config.yml
kubectl get configmap -n metallb-system
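If you want to double-check the address pool that MetalLB will use, you can print the ConfigMap content back out:
kubectl get configmap config -n metallb-system -o yaml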
kubectl expose deployment nginx-1 --port 80 --type=LoadBalancer --name=nginx-1
kubectl get service
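If everything works, kubectl get service should show an EXTERNAL-IP from the MetalLB pool instead of <pending>; the output will look roughly like this (exact IPs, ports and age will differ):
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-1   LoadBalancer   10.98.110.42   192.168.1.65   80:31234/TCP   2m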
Thank you for your tutorial. I am lost at "kubectl expose deployment nginx-1 --port 80 --type=LoadBalancer --name=nginx-1" because in "Part 3" a "deployment" wasn't discussed as far as I see. 'Error from server (NotFound): deployments.apps "nginx-1" not found'. Same error when I replace "nginx-1" with "nginx-example" that was created in Part 3.
I have added the example for nginx-1. You can find it here:
https://github.com/erik-de-bont/blog/blob/main/kubernetes/part_4/nginx-1-example.yml
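For reference, a minimal nginx-1 deployment could look roughly like the sketch below; the actual file published at the link above may differ:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx:stable   # any nginx image with an ARM build works on a Pi
        ports:
        - containerPort: 80
```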
This works: kubectl expose service nginx-example --port 80 --type=LoadBalancer --name=nginx-example1
I just saw your post, thanks!
I set up my IPAddressPool wrong the first time around, I've since deleted and recreated it, but the service keeps getting an external IP from the first pool I created and not the active one. Is this cached somewhere that I can clear it out?
You can type this command to check the log: kubetail -l app.kubernetes.io/component=speaker -n metallb-system. More info: https://metallb.universe.tf/configuration/troubleshooting/.
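If you don't have kubetail installed, plain kubectl logs with the same label selector gives similar information:
kubectl logs -l app.kubernetes.io/component=speaker -n metallb-system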
If that doesn't help, I suggest completely removing MetalLB via this command:
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Wait a few minutes so Kubernetes can remove all MetalLB pods, re-apply the installation via "kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml", and redo the configuration. I had to change it in the past as well, and this procedure worked.
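Put together, the reset sequence looks roughly like this (deleting and recreating the ConfigMap is an assumption, only needed if you also want to replace the existing config):
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
kubectl get pods -n metallb-system    # wait until the controller and speaker pods are gone
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
kubectl delete configmap config -n metallb-system
kubectl create -f metallb-config.yml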
There are a few new things with MetalLB that are important to note:
1. The new way of making a config is to declare an IP address pool and an L2 advertisement (for layer 2 mode); a sketch is included after these notes. This can be seen here in the docs: https://metallb.universe.tf/configuration/#layer-2-configuration
2. If you're using a single node in your cluster with kubeadm, make sure you remove the label "exclude-from-external-load-balancers" on the control plane node. Not removing it means your LB IP address isn't advertised to the network. I wasn't able to curl my load balancer because of this issue. The command to fix it is:
```
kubectl label nodes <node-name> node.kubernetes.io/exclude-from-external-load-balancers-
```
This is a known issue and is being resolved: https://github.com/metallb/metallb/issues/2285
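For MetalLB v0.13 and later, the ConfigMap from this post is replaced by custom resources. A layer 2 configuration equivalent to the address pool above would look roughly like this (see the layer 2 configuration page linked in point 1):
```
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.65-192.168.1.127   # same example range as the ConfigMap above
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```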