One of the issues I encountered early on in migrating my Docker Swarm workloads to Kubernetes on GKE was how to reliably permit inbound traffic into the cluster.
This is a companion discussion topic for the original entry at https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/
I used Keepalived for this. Why did you choose HAProxy? Just wanted to understand before I change my setup.
In my case, I only have one external VM for HAProxy, although I might add some GCP HA elements in the future.
When an incoming request hits the HAProxy VM on port 443 (for example), it’s forwarded to the IP of the Kubernetes node running Traefik, on port 30443.
Can keepalived do this port-converting magic?
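For reference, the forwarding described above can be sketched in haproxy.cfg roughly like this (the node IPs are hypothetical placeholders, not my actual config):

```
# Minimal sketch, assuming two worker nodes at 10.0.0.11/.12
# and Traefik exposed on NodePort 30443
frontend https-in
    bind *:443
    mode tcp                      # pass TLS through untouched
    default_backend traefik-nodes

backend traefik-nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:30443 check
    server node2 10.0.0.12:30443 check
```

The `mode tcp` part is what lets TLS termination stay downstream at Traefik rather than on the proxy itself.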
Hmm, I thought the port mapping was what the ingresses were for… But I see that you’re also terminating SSL at the HAProxy level. Currently, that goes all the way to Traefik for me, which means I need certs for all of the different endpoints (basically like you had done for the swarm).
But yes, one of the reasons I used Keepalived was for HA. I have keepalived for the masters for the control plane, and also for the workers for the services.
Since Kubernetes is really designed for clouds with things like ELBs and ALBs, this is a critical problem to solve for my bare-metal deploy, which is why I’m poking at it.
Ah, now that I think about it, I did run into the IP-per-service problem. I was setting up email in the cluster and had to resort to a separate IP address for my mail traffic vs. my HTTP traffic. Okay, I guess I need keepalived + HAProxy to solve my problem?
Yes, the IP-per-service thing is a PITA.
I don’t actually terminate SSL on HAProxy, BTW; its primary functions are:
To allow me to use “real world” ports (like 443) while still using (free) NodePorts (30000+) in the cluster, and…
To “find” my cluster nodes’ IPs for NodePort services, since the nodes have unpredictable IP addresses.
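To illustrate what I mean by the NodePort side of this, a Traefik service exposed that way would look something like the following (the names and ports here are examples, not my exact manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: traefik
  ports:
    - name: websecure
      port: 443        # cluster-internal port
      targetPort: 443  # container port on the Traefik pods
      nodePort: 30443  # exposed on every node; must fall in 30000-32767 by default
```

Every node then answers on 30443, which is why HAProxy just needs a list of node IPs to target.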
I use keepalived along with k3s’s inbuilt load-balancer support and Traefik 2. While this probably wouldn’t work well for a cloud cluster, since it depends on doing your own network-wrangling, it works great for a local one.
Short version: I proxy everything through Traefik: HTTP, raw TCP, or UDP. Since Traefik is defined as a LoadBalancer-type service in this configuration, k3s automatically spawns a DaemonSet of svclb-traefik-* pods, which ensure that the exposed ports (25, 443, 53, etc.) are available on every node and forwarded to Traefik. I then have keepalived set up to create a cluster address (172.16.0.140) which is always forwarded to a cluster node (172.16.0.141-149) that’s up.
And voilà, a single (redundant) cluster address with an arbitrary number of service ports!
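For anyone wanting to replicate the keepalived side of this, it’s just a standard VRRP setup; something like the sketch below on each node (the interface name is an assumption, and the priority should differ per node so one wins the election):

```
# /etc/keepalived/keepalived.conf - minimal sketch
vrrp_instance K3S_VIP {
    state BACKUP             # let priority decide the master
    interface eth0           # adjust to your NIC
    virtual_router_id 51
    priority 100             # use a different value on each node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        172.16.0.140/24      # the cluster address
    }
}
```

Whichever node holds the highest priority (and is up) owns 172.16.0.140, and the svclb DaemonSet takes it from there.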
Nice, that’s a great solution! I’m using a small variation on a bare-metal cluster with metallb (because my firewall is BGP-capable) instead of keepalived, but your solution is simple, redundant and robust!
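For comparison, the MetalLB equivalent of that cluster address is just a peer plus an address pool; a rough sketch using the CRD-based config of recent MetalLB versions (the ASNs and addresses are placeholders):

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: firewall
  namespace: metallb-system
spec:
  myASN: 64512              # MetalLB's ASN (placeholder)
  peerASN: 64513            # the firewall's ASN (placeholder)
  peerAddress: 172.16.0.1   # the BGP-capable firewall
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services
  namespace: metallb-system
spec:
  addresses:
    - 172.16.0.200-172.16.0.210   # handed out to LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: advert
  namespace: metallb-system
spec:
  ipAddressPools:
    - services
```

The firewall then learns a route to each service IP via BGP, instead of keepalived floating a single VIP.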