Self Hosted k8s: cannot initialize workspace: cannot connect to workspace daemon v0.6.0

When creating a new workspace, the workspace automatically terminates and returns the following message:

{"error":"open /workspace/.gitpod/content.json: no such file or directory","level":"info","message":"no content init descriptor found - not trying to run it","serviceContext":{"service":"supervisor","version":""},"severity":"INFO","time":"2021-01-19T20:05:21Z"}
{"error":"Get \"http://localhost:23000/\": dial tcp 127.0.0.1:23000: connect: connection refused","level":"info","message":"IDE is not ready yet","serviceContext":{"service":"supervisor","version":""},"severity":"INFO","time":"2021-01-19T20:05:21Z"}
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell

Cannot Initialize workspace: cannot connect to workspace daemon.
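
As a side note on reading these messages: supervisor logs are structured JSON, so individual fields can be pulled out programmatically when scanning longer logs. A minimal sketch (the log line is copied from the first message above):

```python
import json

# One supervisor log line from the workspace output above
line = '{"error":"open /workspace/.gitpod/content.json: no such file or directory","level":"info","message":"no content init descriptor found - not trying to run it","serviceContext":{"service":"supervisor","version":""},"severity":"INFO","time":"2021-01-19T20:05:21Z"}'

entry = json.loads(line)
# Pull out the fields that matter when triaging
print(entry["severity"], "-", entry["message"])
# -> INFO - no content init descriptor found - not trying to run it
```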

Has anyone seen this issue before? If so, what did you do to fix it?

Could it be that it’s this bug? https://github.com/gitpod-io/gitpod/issues/2956#issue-787997512

Could you tell me more about your setup? How did you set up Kubernetes? How many nodes do you have? On which node is ws-manager, and on which are ws-daemon and your workspace?


Hi @corneliusludmann,

It might be. I created a ws-daemon Service that selects the ws-daemon pods:

apiVersion: v1
kind: Service
metadata:
  name: ws-daemon
  labels:
    gitpod.io/nodeService: ws-daemon
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    gitpod.io/nodeService: ws-daemon

Then I changed the ws-manager-config ConfigMap to point at the ws-daemon Service's cluster IP, i.e. 10.111.242.XXX instead of ws-daemon, and restarted the ws-manager pod. I'm not sure whether this helped, though.
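
One Kubernetes detail worth noting about the Service approach above (not from the thread, just a general observation): a plain ClusterIP Service load-balances across all matching pods, so a workspace may be routed to the ws-daemon running on a different node than its own. On newer clusters, the standard internalTrafficPolicy field (beta in Kubernetes v1.22, GA in v1.26) can restrict traffic to node-local endpoints; a sketch of the same Service with that field set:

apiVersion: v1
kind: Service
metadata:
  name: ws-daemon
  labels:
    gitpod.io/nodeService: ws-daemon
spec:
  internalTrafficPolicy: Local
  ports:
  - port: 8080
    protocol: TCP
  selector:
    gitpod.io/nodeService: ws-daemon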

I also hit this error with my self-hosted setup.

Current Setup:
3 K8s nodes
1 Control Plane
2 Worker nodes

ws-daemon runs on both worker nodes
ws-manager runs on node 102