Gitpod proxy vs. Kubernetes ingress controller and ingress

What is the reasoning behind the proxy setup for Gitpod rather than using a standard ingress controller and defining ingresses for the services to be accessed?

I have an RKE cluster which I created with the default ingress controller disabled, so that ports 80 and 443 are available for the proxy services.

The L4 load balancer is stuck as pending and I am unable to access the Gitpod setup. I am guessing this is because there are no public endpoints defined for the RKE cluster.

I have tried pointing the DNS names I need at all the public IPs of the cluster, but this does not help. I think I need to configure something on the cluster so that the load balancer completes its setup. It’s almost as if the defined selectors for the LB are not being matched.

Any ideas?

Should I have an external LB in front of this, pointing to the 3 nodes?

Hi Thomas,

we use the custom proxy setup to implement a range of request processing along the way, e.g. authentication, header filtering, and, most importantly, dynamic routing of requests to workspaces and their ports.

By default, Gitpod Self-Hosted does not set a loadBalancerIP and expects Kubernetes to assign the LB service a public IP. I’m not too familiar with RKE, but it seems that isn’t happening (as you say yourself, there are no public endpoints, hence the L4 LB not coming up).

Could you try manually assigning a public endpoint by configuring an IP address in components.proxy.loadBalancerIP in the values.yaml?
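For reference, a minimal sketch of what that could look like in the Helm values.yaml, assuming the components.proxy structure mentioned above (the IP is a placeholder for a public IP your provider can actually bind, and the exact chart layout may differ between Gitpod versions):

```yaml
# values.yaml (sketch) -- pin the proxy's LoadBalancer service to a fixed public IP
components:
  proxy:
    loadBalancerIP: "203.0.113.10"  # placeholder; replace with an IP available to your cluster
```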

I think I have tried this - without luck.

Would the IP just be one of the nodes in the cluster? Currently all three nodes have the same roles (controlplane, etcd and worker).

I have tried using the IP address that the cluster API is accessible on without any luck.

I usually just use the default ingress controller setup and can easily access any of the nodes in an RKE cluster, so I am guessing the RKE ingress controller setup opens up something that we need here.

It would be the loadBalancerIP as described in the Kubernetes docs:

Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load-balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadBalancerIP field that you set is ignored.
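For illustration, here is what that field looks like on a plain Service of type LoadBalancer (the name, selector, and IP below are placeholders; Gitpod’s chart renders a comparable Service for its proxy):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxy                    # placeholder name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # only honored if the cloud provider supports it
  selector:
    component: proxy             # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```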

RKE supports L4 and L7 load balancers; which one you can use depends on the cloud provider you’re hosting RKE on.

If the load balancer setup does not work for you, you could set up Gitpod to provide a ClusterIP service by setting components.proxy.serviceType to ClusterIP in the values.yaml. Then you could provide your own ingress that exposes this cluster-internal service.
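As a sketch, the ClusterIP variant would be a one-line change in values.yaml (again assuming the components.proxy layout referenced in this thread):

```yaml
# values.yaml (sketch) -- keep the proxy cluster-internal and front it with your own ingress
components:
  proxy:
    serviceType: ClusterIP
```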

OK, interesting. I’ll try and get that going.

Good day, any updates? Thanks, Björn

Unfortunately there is no such thing as a standard ingress controller (if there were, Red Hat would not have felt the need to fork K8s and create their own Router Controller concept, and Rancher wouldn’t have needed to make the L7 ingress controller to which I believe you refer). Kubernetes lets you expose a service through ClusterIP, NodePort, and LoadBalancer services. Most applications I’ve come across deployed by Helm ship with an L7 ingress pod (such as an nginx proxy) and rely on the operator to configure the L4 ingress. We have both Gitpod and Harbor Registry deployed, and they have a similar architecture where the L7 ingress is deployed with the Helm chart as an nginx pod, but the operator is required to choose how to expose that service.

We have RKE deployed on AWS-compatible infrastructure and it deploys an L4 LoadBalancer (ELB) which connects to Gitpod to expose the service. In an RKE setup which is not cloud enabled (such as a normal VMware environment), you will be required to deploy an ingress controller (you can choose from many in the Rancher catalog), and through the use of K8s labeling you can have the routes connected to the Gitpod application (I have never actually done this, but it’s what the documentation says is possible).
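To illustrate that last point, a hand-written Ingress routing the Gitpod host names to a cluster-internal proxy Service might look roughly like this. The service name proxy, the backend port, the host names, and the wildcard rule are all assumptions; check what the chart actually deploys, and note that wildcard hosts and WebSocket handling depend on your Kubernetes version and ingress controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitpod-proxy                   # hypothetical name
spec:
  rules:
    - host: gitpod.example.com         # placeholder main domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy            # assumed name of the Gitpod proxy Service
                port:
                  number: 80           # assumed HTTP port of that Service
    - host: "*.ws.gitpod.example.com"  # workspaces and their ports are served on dynamic subdomains
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 80
```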