Self-Hosted k8s - Error during WebSocket handshake 302

Hi,

First off, I love Gitpod; I think it’s the best thing since sliced bread. Unfortunately, I haven’t had much luck with the self-hosted version yet. I set up an EKS cluster and used Helm (v3.2.1) to deploy it. The only values I changed in values.yaml are the hostname and the authProviders settings. The deployment succeeds and all the pods come up fine, but when I try to access Gitpod (at the URL I set up, directly against the ELB, and via a local port-forward), the black screen starts to come up and then I get a connection error: “We are having trouble connecting to the server. Either you are offline or websocket connections are blocked.” I’ve also tried hitting /workspaces directly, and that doesn’t work either. Has anyone run into this issue?
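For reference, this is roughly how I deployed it. The chart reference, release name, and hostname below are placeholders, and the values file only sketches the two things I changed:

```
# Placeholder names throughout -- adjust to your own chart location and domain.
cat > values.custom.yaml <<'EOF'
hostname: gitpod.example.com   # the hostname I changed
# authProviders: ...           # my OAuth provider settings go here (omitted)
EOF

helm install gitpod <path-or-repo/gitpod-chart> -f values.custom.yaml
```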

Thank you so much!

Ken

Hi Ken,
please check your network settings in AWS, especially the firewall/security group rules. My guess is that the pods are not able to reach each other.
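A quick way to verify the pod-to-pod path is to exec into one of the pods and hit another service directly. The namespace and service name below are just examples, so check what your install actually created first:

```
# See which pods and services exist in the namespace Gitpod was installed into
kubectl -n <gitpod-namespace> get pods,svc

# From inside any pod that ships a shell, try to reach another service
# (here the proxy) on its cluster-internal port; this is only a rough
# reachability check, so a non-2xx response is still "reachable"
kubectl -n <gitpod-namespace> exec -it <some-pod> -- \
  sh -c 'wget -qO- http://proxy:80/ >/dev/null && echo reachable || echo check response'
```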
Best regards,
Wulf

Hi Wulf!

Thank you for the response! I temporarily opened everything up (all ports, 0.0.0.0/0), but I still see the same issue. I also tried using kubectl to port-forward directly to the proxy service, with the same result. The 302 looks like a redirect: it seems to be redirecting ws://gitpod.adde.to/api/gitpod to ws://gitpod.adde.to, and at that point it fails to establish the WebSocket connection.
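To show what I mean, here is roughly how I’m poking at the handshake through the port-forward. The namespace, service name, and ports are what I believe my install uses, so they may need adjusting:

```
# Forward a local port to the proxy service
# (check the real service name/port with `kubectl get svc`)
kubectl -n <gitpod-namespace> port-forward svc/proxy 8080:80

# In another terminal: fake a WebSocket upgrade against /api/gitpod
# and watch the response code and headers
curl -v 'http://localhost:8080/api/gitpod' \
  -H 'Connection: Upgrade' \
  -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' \
  -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=='
```

A successful handshake should answer with 101 Switching Protocols; instead I get the 302 that points back at the root.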

Very strange! :confused:

Hi Ken,
are you using HTTP or HTTPS?

Could this be related to the issues we’ve been having, @wulfthimm? If so, could it be fixed via the Helm chart edits?

We are experiencing the same issue. Are there any solutions yet?

I have exactly the same problem when deploying Gitpod with Helm 3.0.3 on k8s 1.17.9.

The problem can’t be related to the firewall, since the first HTTP request comes through and the WebSocket connection goes over the same port. All the pods look good.
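For completeness, this is how I’m checking that things look healthy. I’m assuming the proxy component is a plain Deployment called `proxy`, so adjust the names to whatever your chart installed:

```
# All pods should be Running and ready
kubectl -n <gitpod-namespace> get pods

# Tail the proxy while reproducing the failing request
# to see how it handles /api/gitpod
kubectl -n <gitpod-namespace> logs deploy/proxy -f
```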


I just ran the self-hosted install on AWS and am running into the same issue when hitting /workspaces for the first time.