No space left on device in Gitpod self-hosted on a k8s cluster while building a workspace image

The /var/gitpod folder consumes all my disk space (80G) and all pods change to the Evicted state

Hi @suunavier! Oops, sorry that Gitpod is eating so much disk space. That’s definitely not expected / a bug.

Could you please try to figure out what is using up so much disk space? (For example, by running du -hd 1 in /var/gitpod, then in the biggest subdirectory, etc.)
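In case it helps, here is a small sketch of that drill-down, with the output sorted so the biggest subdirectories come first (the /var/gitpod path is just the starting point; adjust TARGET as needed):

```shell
# Show the largest first-level subdirectories of a target directory.
# TARGET defaults to /var/gitpod; override it to inspect another path.
TARGET="${TARGET:-/var/gitpod}"
du -hd 1 "$TARGET" 2>/dev/null | sort -rh | head -n 10
```

Repeat in the biggest subdirectory until you find the culprit.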

Hi @jan, the /var/gitpod/docker directory consumes all my disk space. Thanks for your response!

Hi @suunavier!

Thanks for identifying this. When it’s the docker/ directory that’s full, you may want to run docker system prune to free up some disk space again.

This can be achieved like so:

  1. Switch to the appropriate Kubernetes context (kubectx lists them, use kubectx <context>)

  2. Run this:

kubectl exec -it <image-builder-pod-name> -c dind -- sh
# then, inside the container's shell:
export DOCKER_HOST=tcp://localhost:2375
docker system prune --filter "label!=gitpod.io/image-builder/protected"

:bulb: Hint: Since the garbage collector is very efficient at removing old workspaces, it is sufficient to run docker system prune without --all. In case all resources have to be removed, it may be necessary to restart the image-builder by deleting the pod with kubectl delete pod image-builder-... to force the image-builder to start from scratch and avoid running in a corrupted state.
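For reference, a minimal sketch of that restart (the image-builder pod name varies per install, so it is looked up first; this assumes there is exactly one such pod):

```shell
# Find the image-builder pod and delete it; its Deployment will
# recreate it, so the builder starts from a clean state.
POD=$(kubectl get pods -o name | grep image-builder | head -n 1)
kubectl delete "$POD"
```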


Also, for completeness’ sake, here is how we deal with other directories if they use too much disk space (might not be applicable to you, but could be useful to others visiting this topic in the future):

  • workspaces/: if the workspace instance is still running, delete only the contents; otherwise, delete the whole folder (check with kubectl get pods | grep <workspace_instance_id>)

  • sync-temp/: delete all contents; this should be failsafe

  • theia/: delete versions that are more than 1 day older than the latest one (there might still be workspaces running that depend on an older version!)
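A small sketch of the workspaces/ check above (the instance ID is a placeholder you’d fill in):

```shell
# Decide whether a workspace folder can be removed entirely.
# WS_ID is a placeholder for the workspace instance ID.
WS_ID="<workspace_instance_id>"
if kubectl get pods | grep -q "$WS_ID"; then
  echo "workspace still running: delete only the folder's contents"
else
  echo "workspace gone: the whole folder can be deleted"
fi
```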

Hi @jan, thanks for your response.
I did as you mentioned, but it only reported “Total reclaimed space: 34.66MB”. /var/gitpod/docker/vfs consumes all my disk space, and all running pods change to the Evicted state.

Hi @suunavier! Can you run docker images on that machine with the full disk and share the output with us? That should show which images Docker has downloaded and how much disk space they use.
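In case it helps, the same pattern as the prune commands earlier in this thread works here — exec into the dind container and point the Docker CLI at it (the pod name is a placeholder):

```shell
# Open a shell in the image-builder's dind container...
POD="<image-builder-pod-name>"   # placeholder: your image-builder pod
kubectl exec -it "$POD" -c dind -- sh
# ...then, inside that shell, list images and their sizes:
export DOCKER_HOST=tcp://localhost:2375
docker images
```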

Hi @meysholdt
I solved the “no space left on device” problem while building the workspace image, but now I have another problem: Gitpod is stuck at “acquiring node (1/3)” and the workspace state in Gitpod is pending.

I had this problem too, and it’s been a while so I’m sure you probably don’t need this anymore but in case someone else finds this…

I’m running the helm chart in k3d, and for some reason the way it does local-path PVs was causing DIND to mess up and pull the layers for the image-builder over and over again… probably something to do with mismatched resulting hashes or something.

Either way, on to the fix: all I had to do was set hostDindData: false. This carries an annoying downside of having to pull all layers and re-build the image every time the image-builder pod restarts. But… it’s working for the most part.

(dindMtu is related to my CNI and k3d’s limitations, so not related to the fix, but I’m showing my full imageBuilder object in the example below)

  components:
    imageBuilder:
      dindMtu: 1450
      hostDindData: false
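If you also keep these values in a file, re-applying the chart with the change might look like this (the release and chart names here are placeholders for whatever your install uses):

```shell
# Re-apply the chart with the updated values file.
helm upgrade "<release-name>" "<gitpod-chart>" -f values.yaml
```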

After I got that working, I wound up with the pending state stuff too, until I ran mount --make-rshared / in my k3d server and worker pods. Just a side note…
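For anyone else hitting that: since k3d nodes run as docker containers on the host, one way to sketch running that mount command on all of them is below (the k3d- name prefix is k3d’s default; verify against your cluster before running):

```shell
# Make / a shared mount inside every k3d node container.
for node in $(docker ps --format '{{.Names}}' | grep '^k3d-'); do
  docker exec "$node" mount --make-rshared /
done
```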