Gitpod performance

Hi guys,

Is there a status page to view Gitpod capacity? It gets very slow sometimes, and I use Gitpod for full-time work, so it can be a bit frustrating, especially when I don’t know whether the issue is on my side, with my connection, or with capacity.

Hi @erasmuswill!

There is https://status.gitpod.io/ but it doesn’t mention capacity, only whether performance is currently degraded or some services are down.

Gitpod is expected to operate as fast as possible at all times. If it doesn’t, that’s a bug we need to fix (or at least investigate and document, e.g. if the problem is with the user’s connectivity / hardware / browser).

It would be helpful if you could provide more details on the performance symptoms you observe.

For example, here are a few common performance problems we occasionally see:

  • If the user is located far away from the nearest Gitpod cluster, the higher latency may slow down Gitpod page loads & Terminal responsiveness. (We’re considering adding new regional clusters in the coming months, notably to improve APAC latencies.)

  • If you start a workspace during regional cluster scale-up (e.g. during “morning rush-hour” when many people are starting workspaces at the same time) you may see prolonged “Acquiring Node” and “Pulling Docker Images” times during your workspace start-up. (This is something we’re currently working on fixing with predictive cluster scale-up.)

  • If your workflow requires multiple vCPUs to run at maximum speed for several minutes (e.g. compilation), you may see Gitpod’s CPU fair-use policy kick in, temporarily limiting your workspace to fewer vCPUs.
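
If you suspect the CPU fair-use limit is what you’re hitting, one rough (and unofficial) way to check from inside a workspace is to read the container’s cgroup CPU quota. This is only a sketch assuming cgroup v1 paths; on cgroup v2 the limit lives in /sys/fs/cgroup/cpu.max instead:

# Effective CPU limit of the workspace container (cgroup v1 paths)
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
if [ "$quota" -gt 0 ]; then
  awk -v q="$quota" -v p="$period" 'BEGIN { printf "Effective limit: %.1f vCPUs\n", q / p }'
else
  echo "No CPU quota set; node reports $(nproc) CPUs"
fi
# If the load average stays well above the effective limit, slowness is more
# likely CPU throttling than network latency.
uptime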


It sometimes takes minutes for a workspace to start. After it starts, is there any file left around that indicates the elapsed time from clicking ready-to-code to really-ready-to-code? That would be for that particular instance; if it were recorded for all the workspaces under my account, that would be helpful. I am on the free plan. Would a paid plan improve the start-up time? Finally, I experimented with tiny Alpine images and that did not seem to help.
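
(I suppose I could time it myself with something like the sketch below, though that only captures the part after the workspace tasks start running, not the node-acquisition or image-pull phases. The file name and the idea of wiring it into .gitpod.yml tasks are purely illustrative, not a built-in Gitpod feature.)

# In a before/init task in .gitpod.yml: record when the workspace tasks begin
date +%s > /workspace/.startup-epoch

# Later, in an interactive terminal, once everything feels ready:
start=$(cat /workspace/.startup-epoch)
echo "Seconds from first task to ready terminal: $(( $(date +%s) - start ))"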

I’m all the way in South Africa, so my latency is not the best, but usually sub-200 ms:

❯ ping -c 10 ws-eu03.gitpod.io
PING ws-eu03.gitpod.io (34.77.243.141) 56(84) bytes of data.
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=1 ttl=105 time=166 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=2 ttl=105 time=165 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=3 ttl=105 time=165 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=4 ttl=105 time=165 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=5 ttl=105 time=166 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=6 ttl=105 time=166 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=7 ttl=105 time=165 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=8 ttl=105 time=165 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=9 ttl=105 time=166 ms
64 bytes from 141.243.77.34.bc.googleusercontent.com (34.77.243.141): icmp_seq=10 ttl=105 time=165 ms

--- ws-eu03.gitpod.io ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 21ms
rtt min/avg/max/mdev = 164.639/165.300/166.010/0.481 ms

I do have other workspaces that work fine, so I think it may be an environmental issue. The specific project hasn’t been optimised for Gitpod yet, so it’s been that one workspace for the past couple of months.

There are certain rush hours when a lot of workspaces are created at once and the time until a workspace is ready increases massively. We are working on that problem.

Bumping this issue again.
In the last couple of days I have faced recurring performance issues across multiple Gitpod workspaces.
The issues appear out of nowhere. For example, on my current workspace I had just started my Docker containers and changed some lines of code and … zing … now it is so bad that I even get input lag in the terminal, and the services inside my containers are unusable.

I am sitting here in the middle of Germany with a good internet connection.
What is the current state of:

There are certain rush hours when a lot of workspaces are created at once and the time until a workspace is ready increases massively. We are working on that problem.

?


I also currently observe my workspaces being very slow.
Over the last hour the performance degraded so much that it is currently impossible for me to work with Gitpod :frowning:

The main problems I’m experiencing right now with Java / TypeScript:

  • Language servers take several minutes (5+) for initial compilation
  • Language servers take close to a minute for autocompletion proposals (impossible to work with)
  • Dev servers (npm / Karaf) take several minutes to start up and are also very slow afterwards

I’m working mainly with Java and TypeScript, and both language servers take hours to update whenever something is typed.
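
In case it helps with debugging, this is roughly how I check which processes are eating the workspace’s resources when it slows down (plain ps / uptime, nothing Gitpod-specific; just a quick sketch):

# Top CPU and memory consumers inside the workspace at the moment it slows down
ps aux --sort=-%cpu | head -n 10
ps aux --sort=-%mem | head -n 10
# Overall load average versus the CPUs the node reports
uptime
nproc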

WorkspaceID: tomato-swordfish-19vyu4iw
That said, I have also started another workspace, and it seems to have the same performance problems as mentioned above.

Kind regards,
Thomas

I want to report performance degradation as well. I thought there was something wrong with my code: after starting a simple Node.js process in a container, the terminal was just hanging. It turned out I had to wait ~30 seconds just for a simple console.log to appear.

Same here, some of my workspaces are nearly unusable in this state.

Hi @awmath, @wintercounter, @Sandared, thank you all for highlighting this. We’re looking into it right now - will post updates here. :slight_smile:


Maybe helpful: I just recreated my setup from scratch on a local machine … it seems the Java tooling is slow there too … not as slow as on Gitpod, but it seems slower to me than the last time I worked with it locally. However, that might be a perception error on my side.

EDIT: TypeScript tooling too … VS Code in general seems to be slower … a pure Maven build / npm install is as fast as usual locally.


FYI: after recreating my workspaces, they are back up to speed again. But since stopping them was slow as well, I can’t tell whether the recreation or just waiting helped at this point.


Hi again folks. I wanted to update you all as promised.

During our investigation we found that there were several miners running in our eu16 cluster. We blocked these miners and cleaned up their workspaces, which were contributing to the performance issues that you were all experiencing.

Thanks for your patience with this. If you have any more issues, please reach out!


Hi @Pauline,
we are currently facing extremely poor performance in our Gitpod workspaces:

https://beige-bobolink-rnbhxlum.ws-eu18.gitpod.io

A colleague of mine is also having problems with another Gitpod running in eu18.

Same for me in eu18. The workspace is so slow that I can’t even save changes, and it asks me to reload the editor. I also can’t push changes because the code cleanup can’t finish. So it has been completely unusable for 90 minutes now.

Morning @_thiemo @sgurlt!

I’ve just had a look and there seems to be nothing out of the ordinary going on. Are you both still experiencing this poor performance with new workspaces?

The workspace is behaving normally again after being unresponsive for two hours. Are you monitoring workspace performance continuously?


Hi @_thiemo! Yes, we are. Thank you for the feedback, I’ll make sure that this is seen by the right team. I’ll close this thread for now but if there is anything else that I can help with, please get in touch.