Long startup times?

Hello there,

Posting this here because I am not sure if this is an issue affecting other users, too. I am seeing very long launch times for my workspaces at the moment. My repositories are hosted on GitLab, and I have configured prebuilds for my workspaces.

As an example, my workspace coffee-wasp-a1p6q2sw was stuck on “Pulling container images” for over 15 minutes.

Is anyone experiencing the same issues?

Best regards,

Jonas


Hi @j0nes2k!

Looking at the monitoring data, it seems there was a peak in new workspace starts and the scale-up was too slow.

Can you try again, please? It looks good now.

Alex,

All new workspace launches were faster. Thank you for resolving this!


Hey team, just pinging to say this seems to be happening again right now. (I just got a “Something went wrong” response, and a 5-minute debugging period has started.)

The workspace I’m trying to reattach to is https://gray-opossum-fu08lok4.ws-us03.gitpod.io/ if that helps debug the issue; hope it’s just me!

Thanks!

No ticket necessary for my case, as I was able to start up another instance and didn’t lose any work.

Hope it was just a one-off issue.

Thanks for the great work!

Hmm, new pods are getting stuck in “Allocating resources” for me now.

Please try again

Missed responding to this earlier, but whatever was done did resolve the issue at that time.

Thanks all for the reports!

We’re actively working on making start-up times in Gitpod even faster. Here are a few things to consider:

Pulling container image

This means that Gitpod is currently pulling your workspace’s Docker image onto a node (server) in order to start your workspace there.

Things that are known to make this step slow:

  • Morning scale-up: If it’s between 08:00 and 12:00 in your region (e.g. Europe, the Americas), there is probably a massive scale-up happening, with many people trying to start workspaces at the same time and Gitpod scrambling to add & initialize new nodes fast enough. We’ve already made significant improvements here (with “ghost workspaces”), and we’re currently investigating something (we’re not sure what, but we can measure it) that makes this step particularly slow in some cases (e.g. ~10 minutes). Fixing that is currently a top priority for the team.

  • Big images: If your repository uses a very large custom Docker image (e.g. 10GB or more), you will probably see longer “Pulling container image” times for your project, especially during morning scale-up.

  • Old images: If your repository’s custom Docker image is based on one of Gitpod’s default images (e.g. gitpod/workspace-full or a derivative), that’s great! It means that, when you start a workspace, the node will most likely already have most of your image’s layers locally, so it only needs to pull the few additional layers you’ve added on top (e.g. just the apt-get install some-tool). However, this assumption breaks down if you don’t modify your custom image for a long time, because Gitpod’s default images do change a little over time. If your custom image is from 6 months ago, chances are it will no longer have many layers in common with other workspaces on the node, so Gitpod will need to pull your entire image onto every node every time you start a workspace. The solution here is to regularly make small changes to your custom Dockerfile (e.g. by adding ENV TRIGGER_REBUILD=1 and incrementing the value from time to time – custom images are cached based on a hash of your custom Dockerfile; see the sketch after this list).

  • Old workspaces: If you always restart the same workspace, instead of creating a fresh new workspace for each new task (the recommended way to use Gitpod), the base image of your workspace will gradually become older over time, leading to the same problem as described above, even if your repository doesn’t specify a custom Docker image (all workspaces are assigned an image, and it stays the same forever, so if your workspace is 6 months old, its image will also be 6 months old). The solution here is to let old workspaces go and embrace ephemeral workspaces: just create a fresh new workspace for each new task, git push your work at the end of the day, and never restart an old workspace!
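
For reference, here is a minimal sketch of the cache-busting trick mentioned under “Old images”. The base image and some-tool are just the examples from this thread, and the file name .gitpod.Dockerfile is only the usual convention – adjust all of it to your project:

  # .gitpod.Dockerfile – typically referenced from .gitpod.yml via "image: file: .gitpod.Dockerfile"
  FROM gitpod/workspace-full

  # Bump this value from time to time to force a rebuild, so your image
  # is re-based on the latest layers of the Gitpod default image.
  ENV TRIGGER_REBUILD=2

  # Your actual customizations on top of the base image
  RUN sudo apt-get update && sudo apt-get install -y some-tool

Any edit to the Dockerfile changes its hash, which is what invalidates the image cache – the ENV line is just a convenient one-liner to bump.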

Allocating resources

This means that Gitpod is currently at max capacity in your region and is waiting for a new node to be ready before allocating your workspace to it. (All Gitpod clusters auto-scale, but it may take a bit of time before newly added nodes are ready to accept new workspaces.)

With the introduction of “ghost workspaces”, we consider this problem solved, so you should almost never see “Allocating resources” anymore (or just briefly).

Still, if you do see it for a prolonged time, this might be because ghost workspaces are currently disabled in your region, or they’re not working properly. Please absolutely report this if it happens to you!
