Kicking the tires on 0.10.0-alpha1

I’m testing out yesterday’s alpha release of Gitpod Self-Hosted 0.10.0. After applying all the chart template changes for the private CA certificates, I can successfully launch Gitpod on my Rancher-managed k8s cluster. When I start a workspace, I get an error that says:

Oh, no! Something went wrong!
cannot initialize workspace: cannot initialize workspace: content initializer failed

Where should I look for logs to debug the issue?

Thanks!

I am interested to hear how your Rancher k8s configuration is set up.

What does the config of your cluster nodes look like?

I am trying to do the same with a Rancher k3s setup, which almost worked for 0.9.0, but is now getting some cert errors during the helm install.

I get this error on ws-proxy, proxy and registry-facade:

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"open /mnt/certificates/tls.crt: no such file or directory","level":"fatal","message":"cannot start proxy","serviceContext":{"service":"ws-proxy","version":""},"severity":"CRITICAL","time":"2021-07-21T07:12:44Z"}

There are some changes regarding the certs secret; see the Gitpod Self-Hosted Upgrade Notes.

You now need something like this:
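(Sketch only; the secret name below is an example, use whatever name your Helm values reference:)

$ kubectl create secret tls https-certificates \
    --cert=fullchain.pem --key=privkey.pem

That gives you a kubernetes.io/tls secret with the tls.crt and tls.key entries that ws-proxy, proxy, and registry-facade expect under /mnt/certificates.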

Hi @corneliusludmann,

Was your reply meant for @thomashansen?

I did follow the information here: Update Self-Hosted docs for 0.10.0 · Issue #745 · gitpod-io/website · GitHub, which seems to cover the change of cert key names. Are you saying that the change in cert names is causing the content initializer to fail?

Thanks!
-b

Excellent - thanks @corneliusludmann !

Did the first tests with 0.10.0-alpha1, works very well so far 🙂

I saw that 0.10.0 was released yesterday. Is it really ready? It looks like the images are missing from GCR.

@cass Due to a bug in our release tool, the proper tagging of the images failed. I’m currently fixing this. See also agent-smith daemonset fails in 0.10.0-alpha1 selfhosted · Issue #4885 · gitpod-io/gitpod · GitHub

@bmnave: Could you have a look at the logs of ws-daemon?
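Something like this dumps the logs of all ws-daemon pods (assuming the chart’s default component label):

$ for pod in $(kubectl get pods -l component=ws-daemon -o name); do kubectl logs "$pod"; done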

So I changed the cert files and re-deployed with 0.10.0-alpha1 (since the tags for 0.10.0 are broken).
All of my pods now seem to come up as they should, but the ingress is never defined or completed.

kubectl describe svc proxy | grep -i ingress is not returning anything - and I cannot access the stack.

Any idea where to look?

BTW, if you’d prefer a separate topic for this so the original thread isn’t hijacked too much, let me know.

For me, it looks like this:

$ kubectl describe svc proxy | grep -i ingress
LoadBalancer Ingress:     161.35.212.100, 164.90.168.126
$ kubectl get svc proxy
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                     PORT(S)                      AGE
proxy   LoadBalancer   10.43.106.74   161.35.212.100,164.90.168.126   80:31407/TCP,443:30026/TCP   5m13s
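
If EXTERNAL-IP stays empty for you, it may be worth checking whether a load balancer controller is running at all. On k3s that is the built-in service load balancer (a rough check; the svclb naming is a k3s detail and may differ on your setup):

$ kubectl get pods -n kube-system | grep svclb
$ kubectl get events --field-selector involvedObject.name=proxy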

Any errors in the proxy logs?

Doesn’t look like it:

{"level":"info","ts":1626862484.4184968,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
{"level":"warn","ts":1626862484.4281757,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":14}
{"level":"warn","ts":1626862484.4312363,"logger":"admin","msg":"admin endpoint disabled"}
{"level":"info","ts":1626862484.431776,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00025d810"}
{"level":"info","ts":1626862486.0202777,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/root/.local/share/caddy"}
{"level":"info","ts":1626862486.0205457,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1626862486.020986,"msg":"autosaved config (load with --resume flag)","file":"/root/.config/caddy/autosave.json"}
{"level":"info","ts":1626862486.020996,"msg":"serving initial configuration"}
{"level":"info","ts":1626862486.0210505,"logger":"watcher","msg":"watching config file for changes","config_file":"/etc/caddy/Caddyfile"}

I will try and rebuild the cluster from scratch and see if that makes any difference, as I have installed and uninstalled a few times.

Just to be clear, should k3s still be created without flannel and with calico configured?
Are there any updated requirements listed anywhere?

No updated requirements. Still with calico. FYI, that’s how I test it with k3s: GitHub - corneliusludmann/gitpod-k3s-droplet at gitpod-0-10-0
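
For reference, the relevant part is installing k3s with flannel and the built-in network policy controller disabled, then applying calico on top (a sketch based on the documented k3s flags; adjust to your installer):

$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy" sh -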

I have 4 worker nodes, so I have 4 ws-daemon pods. That’s more logs than I can fit in a forum post, so here’s my trimmed-down summary. If there is something more specific you’re looking for, please let me know.

Two of them have messages that look like this:

{"instanceId":"f938dcf2-b028-4266-afe3-94cc66d276ea","level":"error","message":"received pod deletion for a workspace, but have not seen it before. Ignoring update.","serviceContext":{"service":"ws-daemon","version":""},"severity":"ERROR","time":"2021-07-21T13:56:02Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","ID":"2c9547c811e5969692f7ee613877aa30423dd2b46dabb20b2e355e41fd11635f","containerImage":"","error":"not found\ngithub.com/containerd/containerd/errdefs.init\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/errors.go:45\nruntime.doInit\n\truntime/proc.go:6309\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.main\n\truntime/proc.go:208\nruntime.goexit\n\truntime/asm_amd64.s:1371\ncontainer \"2c9547c811e5969692f7ee613877aa30423dd2b46dabb20b2e355e41fd11635f\" in namespace \"k8s.io\"\ngithub.com/containerd/containerd/errdefs.FromGRPC\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/grpc.go:107\ngithub.com/containerd/containerd.(*remoteContainers).Get\n\tgithub.com/containerd/containerd@v1.5.2/containerstore.go:50\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).handleContainerdEvent\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:161\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).start\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:144\nruntime.goexit\n\truntime/asm_amd64.s:1371","level":"warning","message":"cannot find container we just received a create event for","serviceContext":{"service":"ws-daemon","version":""},"severity":"WARNING","time":"2021-07-21T13:57:32Z"}

The other two look like this:

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","ID":"e17b2e7a1229f749e4b978204b14ae523a89ae7de3f7b23b55731e661e3a4ef4","containerImage":"","error":"not found\ngithub.com/containerd/containerd/errdefs.init\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/errors.go:45\nruntime.doInit\n\truntime/proc.go:6309\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.main\n\truntime/proc.go:208\nruntime.goexit\n\truntime/asm_amd64.s:1371\ncontainer \"e17b2e7a1229f749e4b978204b14ae523a89ae7de3f7b23b55731e661e3a4ef4\" in namespace \"k8s.io\"\ngithub.com/containerd/containerd/errdefs.FromGRPC\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/grpc.go:107\ngithub.com/containerd/containerd.(*remoteContainers).Get\n\tgithub.com/containerd/containerd@v1.5.2/containerstore.go:50\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).handleContainerdEvent\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:161\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).start\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:144\nruntime.goexit\n\truntime/asm_amd64.s:1371","level":"warning","message":"cannot find container we just received a create event for","serviceContext":{"service":"ws-daemon","version":""},"severity":"WARNING","time":"2021-07-21T14:17:08Z"}

{"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"info","message":"established IWS server","serviceContext":{"service":"ws-daemon","version":""},"severity":"INFO","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"file":"github.com/opencontainers/runc/libcontainer/cgroups/fscommon/open.go:37","func":"github.com/opencontainers/runc/libcontainer/cgroups/fscommon.prepareOpenat2.func1","level":"debug","msg":"openat2 not available, falling back to securejoin","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec[21299]: =\u003e nsexec container setup","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: ~\u003e nsexec stage-0","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: spawn stage-1","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: -\u003e stage-1 synchronisation loop","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: ~\u003e nsexec stage-1","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: unshare remaining namespace (except cgroupns)","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: spawn stage-2","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: request stage-0 to forward stage-2 pid (21319)","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-2[1]: ~\u003e nsexec stage-2","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: stage-1 requested pid to be forwarded","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: forward stage-1 (21309) and stage-2 (21319) pids to runc","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: signal completion to stage-0","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-1[21309]: \u003c~ nsexec stage-1","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: stage-1 complete","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: \u003c- stage-1 synchronisation loop","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: -\u003e stage-2 synchronisation loop","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: signalling stage-2 to run","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-2[1]: signal completion to stage-0","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: stage-2 complete","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: \u003c- stage-2 synchronisation loop","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-2[1]: \u003c= nsexec container setup","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-0[21299]: \u003c~ nsexec stage-0","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"nsexec-2[1]: booting up go runtime ...","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"child process in init()","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/libcontainer/logs/logs.go:69","func":"github.com/opencontainers/runc/libcontainer/logs.processEntry","level":"debug","msg":"init: closing the pipe to signal completion","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/signals.go:104","func":"main.(*signalHandler).forward","level":"debug","msg":"sending signal to process urgent I/O condition","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/signals.go:104","func":"main.(*signalHandler).forward","level":"debug","msg":"sending signal to process urgent I/O condition","time":"2021-07-21T14:17:13Z"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"cannot initialize workspace:\n    github.com/gitpod-io/gitpod/content-service/pkg/initializer.InitializeWorkspace\n        github.com/gitpod-io/gitpod/content-service@v0.0.0-00010101000000-000000000000/pkg/initializer/initializer.go:425\n  - no backup found","level":"error","message":"content init failed","severity":"ERROR","time":"2021-07-21T14:17:13Z"}

{"file":"github.com/opencontainers/runc/signals.go:94","func":"main.(*signalHandler).forward","level":"debug","msg":"process exited","pid":21319,"status":42,"time":"2021-07-21T14:17:13Z"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"content initializer failed","instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"error","message":"cannot initialize workspace","serviceContext":{"service":"ws-daemon","version":""},"severity":"ERROR","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"e53504d8-1edf-4214-8373-6c88dfef83f5"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"rpc error: code = Internal desc = cannot initialize workspace: content initializer failed","instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"error","message":"InitWorkspace failed","serviceContext":{"service":"ws-daemon","version":""},"severity":"ERROR","time":"2021-07-21T14:17:13Z","userId":"","workspaceId":""}

{"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"debug","message":"DisposeWorkspace called","req":"id:\"e53504d8-1edf-4214-8373-6c88dfef83f5\"","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","time":"2021-07-21T14:17:13Z","userId":"","workspaceId":""}

{"hooks":1,"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"debug","message":"running lifecycle hooks","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","state":"disposing","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"info","message":"stopped IWS server","serviceContext":{"service":"ws-daemon","version":""},"severity":"INFO","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"debug","loc":"/mnt/workingarea/e53504d8-1edf-4214-8373-6c88dfef83f5","message":"did not find a Git working copy - not updating Git status","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"hooks":0,"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"debug","message":"running lifecycle hooks","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","state":"disposed","time":"2021-07-21T14:17:13Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"crimson-mockingbird-0p7kgyel"}

{"instanceId":"e53504d8-1edf-4214-8373-6c88dfef83f5","level":"debug","message":"DisposeWorkspace called","req":"id:\"e53504d8-1edf-4214-8373-6c88dfef83f5\"","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","time":"2021-07-21T14:17:13Z","userId":"","workspaceId":""}

We’re running Ubuntu 18.04 on our cluster nodes and Rancher deploys k8s v1.20.8-rancher1-1 on them.

I have a very odd setup, so I had to deviate quite a bit from the “suggested” self install. See my post here for a more complete explanation of my system:

I’ve yet to get Gitpod Self-Hosted to work on my cluster.

Hi @bmnave ,

from looking at the logs:

  1. The “cannot find container we just received a create event for” warning is alright and expected.

  2. Here the relevant error is this one: cannot initialize workspace: [...] no backup found.
    That is something that should really never happen, and I came up with this hypothesis. Could it be that you:

  • started a workspace on an old version of Gitpod Self-Hosted
  • upgraded (and for some reason maybe lost the workspace backup?)
  • re-started that workspace, leading to this error?

Ooh…you might be on to something there. Yes, I’ve definitely installed 0.9.0 and mucked with it, uninstalled it, re-installed, etc etc etc, about a billion times so far. And I think I did attempt to helm upgrade from 0.9.0 to 0.10.0.

But between uninstall and reinstall, it seems to “remember” my old workspaces even though I have removed all of the PVCs. Is it possible these are somehow cached in GitLab? I haven’t been able to get some of the workspaces to delete via the Gitpod UI.

I will try a new clean project and report back! Thanks for the help!!
-b

Yup…I see the problem now…

On a new/clean workspace, there is an error in one of the ws-daemon logs that says:

{"error":"git clone https://gitlab.mycompany.com/myproject/testrepo.git . failed (exit status 128): Cloning into '.'...\nfatal: unable to access 'https://gitlab.mycompany.com/myproject/testrepo.git/': SSL certificate problem: unable to get local issuer certificate\n","level":"debug","location":"/dst/testrepo","message":"Running git clone on workspace failed. Retrying in 42.855832355s ...","severity":"DEBUG","sleepTime":42855832355,"stage":"init","time":"2021-07-21T15:45:39Z"}

So I’ve got another CA Cert to mount somewhere…

Seriously though, thanks for the pointer. I was dead in the water regarding where to look next!
-b

@geropl / @corneliusludmann
So I have my CA Cert Chain mounted in the ws-daemon at /etc/ssl/certs/ca-certificates.crt. For some reason that I don’t yet understand, git doesn’t seem to use that when performing the clone. I worked around this issue by setting the environment variable GIT_SSL_CAINFO to /etc/ssl/certs/ca-certificates.crt. The git clone is then successful:

{"level":"info","location":"/dst/my_new_test_project","message":"Git operations complete","severity":"INFO","stage":"init","time":"2021-07-21T16:44:26Z"}

{"file":"github.com/opencontainers/runc/signals.go:94","func":"main.(*signalHandler).forward","level":"debug","msg":"process exited","pid":21261,"status":0,"time":"2021-07-21T16:44:26Z"}

{"hooks":1,"instanceId":"add43608-d913-4987-868b-78522e830014","level":"debug","message":"running lifecycle hooks","serviceContext":{"service":"ws-daemon","version":""},"severity":"DEBUG","state":"ready","time":"2021-07-21T16:44:26Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"amethyst-horse-1xmohg9o"}

But the Gitpod UI just hangs on the “Creating” screen (Pulling container image…). Here’s what I see in the ws-daemon logs:

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","ID":"068d2062b1608f22a9ec7937bfe01b3631da2200f564bbcc99b20b5cc97d361a","containerImage":"","error":"not found\ngithub.com/containerd/containerd/errdefs.init\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/errors.go:45\nruntime.doInit\n\truntime/proc.go:6309\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.main\n\truntime/proc.go:208\nruntime.goexit\n\truntime/asm_amd64.s:1371\ncontainer \"068d2062b1608f22a9ec7937bfe01b3631da2200f564bbcc99b20b5cc97d361a\" in namespace \"k8s.io\"\ngithub.com/containerd/containerd/errdefs.FromGRPC\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/grpc.go:107\ngithub.com/containerd/containerd.(*remoteContainers).Get\n\tgithub.com/containerd/containerd@v1.5.2/containerstore.go:50\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).handleContainerdEvent\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:161\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).start\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:144\nruntime.goexit\n\truntime/asm_amd64.s:1371","level":"warning","message":"cannot find container we just received a create event for","serviceContext":{"service":"ws-daemon","version":""},"severity":"WARNING","time":"2021-07-21T16:48:07Z"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"context deadline exceeded","instanceId":"add43608-d913-4987-868b-78522e830014","level":"warning","message":"cannot wait for container","serviceContext":{"service":"ws-daemon","version":""},"severity":"WARNING","time":"2021-07-21T16:49:15Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"amethyst-horse-1xmohg9o"}

{"container":"","instanceId":"add43608-d913-4987-868b-78522e830014","level":"info","message":"dispatch found new workspace container","serviceContext":{"service":"ws-daemon","version":""},"severity":"INFO","time":"2021-07-21T16:49:15Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"amethyst-horse-1xmohg9o"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"cannot start governer:\n    github.com/gitpod-io/gitpod/ws-daemon/pkg/daemon.(*CgroupCustomizer).WorkspaceAdded\n        github.com/gitpod-io/gitpod/ws-daemon/pkg/daemon/cgroup_customizer.go:38\n  - not found","instanceId":"add43608-d913-4987-868b-78522e830014","level":"error","message":"dispatch listener failed","serviceContext":{"service":"ws-daemon","version":""},"severity":"ERROR","time":"2021-07-21T16:49:15Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"amethyst-horse-1xmohg9o"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","error":"cannot start governer:\n    github.com/gitpod-io/gitpod/ws-daemon/pkg/resources.(*DispatchListener).WorkspaceAdded\n        github.com/gitpod-io/gitpod/ws-daemon/pkg/resources/dispatch.go:82\n  - not found","instanceId":"add43608-d913-4987-868b-78522e830014","level":"error","message":"dispatch listener failed","serviceContext":{"service":"ws-daemon","version":""},"severity":"ERROR","time":"2021-07-21T16:49:15Z","userId":"c282c8e4-ff62-432c-9eac-9c2423ee1e5b","workspaceId":"amethyst-horse-1xmohg9o"}

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","ID":"aded4bd49e4a8c5f48ead768b458e0e904160c672dc8d52850b82f16476ba6c8","containerImage":"","error":"not found\ngithub.com/containerd/containerd/errdefs.init\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/errors.go:45\nruntime.doInit\n\truntime/proc.go:6309\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.doInit\n\truntime/proc.go:6286\nruntime.main\n\truntime/proc.go:208\nruntime.goexit\n\truntime/asm_amd64.s:1371\ncontainer \"aded4bd49e4a8c5f48ead768b458e0e904160c672dc8d52850b82f16476ba6c8\" in namespace \"k8s.io\"\ngithub.com/containerd/containerd/errdefs.FromGRPC\n\tgithub.com/containerd/containerd@v1.5.2/errdefs/grpc.go:107\ngithub.com/containerd/containerd.(*remoteContainers).Get\n\tgithub.com/containerd/containerd@v1.5.2/containerstore.go:50\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).handleContainerdEvent\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:161\ngithub.com/gitpod-io/gitpod/ws-daemon/pkg/container.(*Containerd).start\n\tgithub.com/gitpod-io/gitpod/ws-daemon/pkg/container/containerd.go:144\nruntime.goexit\n\truntime/asm_amd64.s:1371","level":"warning","message":"cannot find container we just received a create event for","serviceContext":{"service":"ws-daemon","version":""},"severity":"WARNING","time":"2021-07-21T16:53:27Z"}

The actual workspace pod seems to be hanging on create trying to pull:

reg.gitpod.mycompany.com:3000/remote/add43608-d913-4987-868b-78522e830014
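
For the record, this is how I’m watching it hang; the label selector is my guess at the chart’s defaults:

$ kubectl get pods -l component=workspace -w
$ kubectl describe pod -l component=workspace | grep -i -A5 pull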

Any ideas where I should look next?
Thanks!
-b