Error "Either you are offline or websocket connections are blocked"

After getting past some stack creation errors, I ran make auth for GitHub.com, and that ran fine:
(screenshot)

But now when I try to pull up the site I’m getting “Either you are offline or websocket connections are blocked”

(screenshot)

What could be the issue?

hmm…when I try to hit the site again, I get

(screenshot)

and looking at the target group I see

(screenshot)

kubectl get pods shows me
(screenshot)

kubectl logs server-bfcc9bc47-kbxnc -c server shows me

So it seems the callback URL is incorrect? But I had updated auth-providers-patch.yaml with

    # Please check the documentation for details on the expected format
    # https://www.gitpod.io/docs/self-hosted/0.5.0/install/oauth
    auth-providers.json: |
      [
        {
          "id": "Public-GitHub",
          "host": "github.com",
          "type": "GitHub",
          "oauth": {
            "clientId": "*************",
            "clientSecret": "************************",
            "callBackUrl": "https://gitpod.mydomain.com/auth/github/callback",
            "settingsUrl": "https://github.com/settings/applications/*******"
          },
          "description": "",
          "icon": ""
        }
      ]
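Before patching again, a quick local sanity check can rule out a malformed payload. This is just a sketch: it inlines the JSON from my patch file (credentials redacted) and confirms it parses and that the callback path is the one GitHub's OAuth app should redirect to:

```shell
# Sanity-check the auth-providers JSON before patching the configmap.
# The payload below mirrors the auth-providers.json key in my
# auth-providers-patch.yaml, with credentials redacted.
python3 - <<'EOF'
import json

payload = '''
[
  {
    "id": "Public-GitHub",
    "host": "github.com",
    "type": "GitHub",
    "oauth": {
      "clientId": "REDACTED",
      "clientSecret": "REDACTED",
      "callBackUrl": "https://gitpod.mydomain.com/auth/github/callback",
      "settingsUrl": "https://github.com/settings/applications/REDACTED"
    },
    "description": "",
    "icon": ""
  }
]
'''
providers = json.loads(payload)  # raises ValueError if the JSON is malformed
oauth = providers[0]["oauth"]
# The GitHub callback for Gitpod should end with /auth/github/callback
assert oauth["callBackUrl"].endswith("/auth/github/callback")
print("auth-providers JSON looks OK")
EOF
```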

So I did make uninstall to start fresh, and also so I could provision an m6i.large instance.

I was able to provision the cluster (working around the create stack errors I ran into in the other thread) and get to the “Welcome to Gitpod” page. Then I ran make auth

(screenshot)

and I get the same error about websocket connections being blocked.

I run kubectl describe configmap auth-providers-config and get

(screenshot)

but those are not the values I have in auth-providers-patch.yaml.

So it seems like there is a problem where make auth is not updating the auth-providers-config configmap as expected?
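For anyone checking the same thing, this is one way to see exactly what the cluster is serving versus what the patch file says (configmap and key names taken from the describe output; the backslash escapes the dot in the key name for jsonpath):

```shell
# Print the provider JSON the cluster is actually serving...
kubectl --kubeconfig .kubeconfig get configmap auth-providers-config \
  -o jsonpath='{.data.auth-providers\.json}'
# ...and the patch you expected it to contain, for a side-by-side eyeball
cat auth-providers-patch.yaml
```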

Looks like the server is just continually restarting.
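When a pod is crash-looping like that, the previous container's logs usually say why it died. A rough sketch of what I'd check (pod names will differ on your cluster):

```shell
# Show restart counts, then pull logs from the last crashed server container
kubectl --kubeconfig .kubeconfig get pods
kubectl --kubeconfig .kubeconfig logs deployment/server -c server --previous
```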

Any ideas?

Looking at setup.sh I see that make auth runs

    # Patching the configuration with the user auth provider/s
    kubectl --kubeconfig .kubeconfig patch configmap auth-providers-config --type merge --patch "$(cat ${AUTHPROVIDERS_CONFIG})"
    # Restart the server component
    kubectl --kubeconfig .kubeconfig rollout restart deployment/server

This runs kubectl with the .kubeconfig file that make creates in the local directory. I thought I would try running those commands using my ~/.kube/config file instead, so I ran
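Reconstructing from the setup.sh snippet above, that would be something like the following (the patch file path is my guess; setup.sh reads it from the AUTHPROVIDERS_CONFIG variable):

```shell
# Same patch + restart as setup.sh, but pointed at the default kubeconfig
# instead of the .kubeconfig that make writes into the working directory
kubectl --kubeconfig ~/.kube/config patch configmap auth-providers-config \
  --type merge --patch "$(cat auth-providers-patch.yaml)"
kubectl --kubeconfig ~/.kube/config rollout restart deployment/server
```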

and it’s working

(screenshot)

Well, at least I’m past the websocket connection error, the pods are running normally, and I see the configmap was updated as expected.

(screenshot)

…and I was able to log in with GitHub OAuth and get to the workspaces screen

(screenshot)