Gitpod installation failed; MySQL pod in CrashLoopBackOff

Dear all,

My first installation of Gitpod failed because of MySQL.

$ helm upgrade --install $(for i in $(cat configuration.txt); do echo -e "-f $i"; done) gitpod .
Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition

The MySQL pod has a CrashLoopBackOff status:

$ kubectl get pods 
NAME                                 READY   STATUS             RESTARTS   AGE
mysql-596ff8656b-x59lh               0/1     CrashLoopBackOff   12         38m

This is a self-hosted Gitpod installation; MySQL is installed on the same machine as the master node.

What should I do?

You could try to add --timeout 60m to your helm command like this:

$ helm upgrade --timeout 60m --install $(for i in $(cat configuration.txt); do echo -e "-f $i"; done) gitpod .

Please let me know if this helps.

It does not help. Is there another way to troubleshoot this?

What does kubectl describe pod mysql-596ff8656b-x59lh and kubectl logs mysql-596ff8656b-x59lh say?

$ kubectl describe pod mysql-596ff8656b-9tkz4

Name:           mysql-596ff8656b-9tkz4
Namespace:      default
Priority:       0
Node:           binderdev-k8s-3.novalocal/130.183.216.91
Start Time:     Wed, 09 Sep 2020 10:06:19 +0000
Labels:         app=mysql
                pod-template-hash=596ff8656b
                release=gitpod
Annotations:
Status:         Running
IP:             10.244.2.139
IPs:
  IP:           10.244.2.139
Controlled By:  ReplicaSet/mysql-596ff8656b
Init Containers:
  remove-lost-found:
    Container ID:  docker://348b809c2012b4d41d04f2e97f94a772c83342474c2f490d2593a00937876289
    Image:         busybox:1.29.3
    Image ID:      docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
    Port:
    Host Port:
    Command:
      rm
      -fr
      /var/lib/mysql/lost+found
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 09 Sep 2020 10:06:24 +0000
      Finished:     Wed, 09 Sep 2020 10:06:24 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
    Mounts:
      /var/lib/mysql from data (rw)
Containers:
  mysql:
    Container ID:   docker://9111f97136fb3306bd82e484a266252dd4e7e2a19e6f783761e705ad6a4c4870
    Image:          mysql:5.7.28
    Image ID:       docker-pullable://mysql@sha256:b38555e593300df225daea22aeb104eed79fc80d2f064fde1e16e1804d00d0fc
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 09 Sep 2020 10:32:30 +0000
      Finished:     Wed, 09 Sep 2020 10:32:30 +0000
    Ready:          False
    Restart Count:  10
    Requests:
      cpu:     100m
      memory:  256Mi
    Liveness:   exec [mysqladmin ping] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [mysqladmin ping] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ALLOW_EMPTY_PASSWORD:  true
      MYSQL_ROOT_PASSWORD:         <set to the key 'mysql-root-password' in secret 'db-password'>  Optional: true
      MYSQL_PASSWORD:              <set to the key 'mysql-password' in secret 'db-password'>  Optional: true
      MYSQL_USER:
      MYSQL_DATABASE:
    Mounts:
      /docker-entrypoint-initdb.d from init-scripts (rw)
      /var/lib/mysql from data (rw)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql
    ReadOnly:   false
  init-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      db-init-scripts
    Optional:  false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From                                Message
  ----     ------       ----                 ----                                -------
  Normal   Scheduled    31m                                                      Successfully assigned default/mysql-596ff8656b-9tkz4 to binderdev-k8s-3.novalocal
  Warning  FailedMount  31m                  kubelet, binderdev-k8s-3.novalocal  MountVolume.SetUp failed for volume "init-scripts": failed to sync configmap cache: timed out waiting for the condition
  Normal   Pulled       31m                  kubelet, binderdev-k8s-3.novalocal  Container image "busybox:1.29.3" already present on machine
  Normal   Created      31m                  kubelet, binderdev-k8s-3.novalocal  Created container remove-lost-found
  Normal   Started      31m                  kubelet, binderdev-k8s-3.novalocal  Started container remove-lost-found
  Normal   Pulled       30m (x4 over 31m)    kubelet, binderdev-k8s-3.novalocal  Container image "mysql:5.7.28" already present on machine
  Normal   Created      30m (x4 over 31m)    kubelet, binderdev-k8s-3.novalocal  Created container mysql
  Normal   Started      30m (x4 over 31m)    kubelet, binderdev-k8s-3.novalocal  Started container mysql
  Warning  BackOff      66s (x145 over 30m)  kubelet, binderdev-k8s-3.novalocal  Back-off restarting failed container

$ kubectl logs mysql-596ff8656b-9tkz4
2020-09-09 10:37:36+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.28-1debian9 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted

So it's MySQL that is unable to change the ownership of /var/lib/mysql/. Could you please provide more information about your cluster setup?

Sorry for the delayed answer; I was on holiday. It's a vanilla cluster with 3 nodes, installed on OpenStack virtual machines running CentOS 7.

NFS is indeed used in my case, but when I look at your link, I see that somebody suggests exec'ing into the mysql pod. The problem is that this pod is in the CrashLoopBackOff state, so I cannot do that.
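For reference: a common cause of this chown failure on NFS-backed volumes is root squashing on the export, where the container's root user is mapped to an unprivileged user on the NFS server and is then not allowed to chown /var/lib/mysql. A minimal sketch of an export line without squashing, assuming a hypothetical server path /srv/nfs/mysql and cluster subnet 10.0.0.0/24:

# /etc/exports on the NFS server (path and subnet are placeholders)
/srv/nfs/mysql  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)

$ sudo exportfs -ra   # reload the export table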

Hi!
Is there anyone who could help me further?
Thanks!

Just trying one last time. :slight_smile:

Hi,

I've been playing with this for some time and finally got it running,
albeit still waiting for that session fix because my domain name is too long (it has 3 parts).

I installed MySQL separately, then ran the scripts to create the database, user, etc.

Then, in values.yaml, I disabled the built-in db connection and added my own config.
I did the same with minio and it worked.

Separately, I have also run the Docker version directly and got that running too, outside of Kubernetes.

There are differences between the Helm chart install, the Docker install, etc. (all new to me).

We could do with a proper write-up on how to configure this outside of GCP and AWS, especially since Gitpod has started working with GitLab at some level; there must be many people out there who are on-prem only and have a self-hosted GitLab that they want to integrate with Gitpod.

At the moment I think it's very confusing: there's a Docker version, there's a self-hosted version,
code here


and then also here

It took me a good few weeks to get my head around all this.

@gitpod team - happy to support producing documentation for those running vanilla Kubernetes, just a bit more direction required.

Thank you for replying! I concur with what you say. I also have access to a self-hosted GitLab and would welcome more extensive documentation about setting up Gitpod in this configuration.

I posted an issue at gitpod-io/self-hosted, but no answer so far, unfortunately.

Try posting in the non-self-hosted section; it seems to be more active.

I had a look at your issue.

Why don't you install MySQL completely separately on your OS?

Then go here

and run those scripts against MySQL,

and then, in values.yaml:

db:
  host: localhost   # or the IP address of the MySQL server (may need to enable remote connections)
  port: 3306
  password: xxx

and further down in values.yaml,

disable the mysql pod:

mysql:
  enabled: false
  fullnameOverride: mysql
  testFramework:
    enabled: false
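To verify that the credentials from values.yaml actually work from inside the cluster, one option is a throwaway client pod; a sketch, assuming a hypothetical MySQL host IP of 10.0.0.5:

$ kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.7 -- \
    mysql -h 10.0.0.5 -P 3306 -u gitpod -p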

I will try this, thanks! MySQL is already separately installed.

Hi,

it is confusing with all the projects at the moment, and we are aware of it. Going open source is a lot of work, but that should be no excuse.
To clear things up:
gitpod-io/self-hosted is the repository that includes a Helm chart to install Gitpod. It contains the stable versions of the Gitpod self-hosted deployment.
gitpod-io/gitpod contains the whole open-source project for Gitpod.

self-hosted does include mysql, minio, and docker-registry as dependencies, but installing them separately is encouraged, as the versions used in the chart get old very quickly, and keeping track of them, including testing, is problematic.
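For example, disabling all three bundled dependencies in values.yaml might look like this (a sketch; the key names are assumed to mirror the chart's dependency names, so check the chart's own values.yaml for the exact spelling):

mysql:
  enabled: false
minio:
  enabled: false
docker-registry:
  enabled: false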

The Helm chart in gitpod is up to date and can be used as well, in the same way.

@maxclac, I am sorry that you did not get an answer yet. I have answered now, but as @hm2075 has already suggested, installing MySQL separately is highly encouraged.

Hi!
Thank you @wulfthimm and @hm2075 for your answers!
As I said, I already have a separate MySQL.

I had the idea of adding a --debug flag to my helm command, and I noticed an issue with db-migrations:

$ helm upgrade --debug --install $(for i in $(cat configuration.txt); do echo -e "-f $i"; done) gitpod .

history.go:52: [debug] getting history for release gitpod
upgrade.go:121: [debug] preparing upgrade for gitpod
upgrade.go:129: [debug] performing update for gitpod
upgrade.go:308: [debug] creating upgraded release for gitpod
client.go:173: [debug] checking 110 resources for changes
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “dashboard-deny-all-allow-explicit”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “db-deny-all-allow-explicit”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “image-builder”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “messagebus-deny-all-allow-explicit”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “proxy-deny-all-allow-explicit”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “registry-facade”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “server-deny-all-allow-explicit”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “workspace-default”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “ws-manager”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “ws-scheduler”
client.go:436: [debug] Looks like there are no changes for NetworkPolicy “ws-sync”
client.go:436: [debug] Looks like there are no changes for PodSecurityPolicy “default-ns-privileged”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “nobody”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “dashboard”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “db-migrations”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “db”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “image-builder”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “messagebus”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “node-daemon”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “proxy”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “registry-facade”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “server”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “workspace”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “ws-manager-bridge”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “ws-manager-node”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “ws-manager”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “ws-scheduler”
client.go:436: [debug] Looks like there are no changes for ServiceAccount “ws-sync”
client.go:436: [debug] Looks like there are no changes for Secret “db-password”
client.go:436: [debug] Looks like there are no changes for Secret “messagebus-certificates-secret-core”
client.go:436: [debug] Looks like there are no changes for Secret “server-proxy-apikey”
client.go:436: [debug] Looks like there are no changes for ConfigMap “registry-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “auth-providers-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “dashboard-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “db-init-scripts”
client.go:436: [debug] Looks like there are no changes for ConfigMap “image-builder-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “proxy-config-nginx”
client.go:436: [debug] Looks like there are no changes for ConfigMap “registry-facade-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “workspace-template”
client.go:436: [debug] Looks like there are no changes for ConfigMap “ws-manager-bridge-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “ws-manager-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “ws-manager-node-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “ws-scheduler-config”
client.go:436: [debug] Looks like there are no changes for ConfigMap “ws-sync-config”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-psp:privileged”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-psp:restricted-root-user”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-psp:unprivileged”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-image-builder”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-node-daemon”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-ws-manager-node”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-ws-scheduler”
client.go:436: [debug] Looks like there are no changes for ClusterRole “default-ns-ws-sync”
client.go:436: [debug] Looks like there are no changes for ClusterRoleBinding “default-ns-nobody”
client.go:436: [debug] Looks like there are no changes for ClusterRoleBinding “default-ns-node-daemon”
client.go:436: [debug] Looks like there are no changes for ClusterRoleBinding “default-ns-ws-manager-node”
client.go:436: [debug] Looks like there are no changes for ClusterRoleBinding “default-ns-ws-scheduler”
client.go:436: [debug] Looks like there are no changes for Role “node-daemon”
client.go:436: [debug] Looks like there are no changes for Role “server”
client.go:436: [debug] Looks like there are no changes for Role “workspace”
client.go:436: [debug] Looks like there are no changes for Role “ws-manager”
client.go:436: [debug] Looks like there are no changes for RoleBinding “dashboard”
client.go:436: [debug] Looks like there are no changes for RoleBinding “db-migrations”
client.go:436: [debug] Looks like there are no changes for RoleBinding “db”
client.go:436: [debug] Looks like there are no changes for RoleBinding “image-builder-rb”
client.go:436: [debug] Looks like there are no changes for RoleBinding “messagebus”
client.go:436: [debug] Looks like there are no changes for RoleBinding “node-daemon:node-daemon”
client.go:436: [debug] Looks like there are no changes for RoleBinding “proxy”
client.go:436: [debug] Looks like there are no changes for RoleBinding “registry-facade”
client.go:436: [debug] Looks like there are no changes for RoleBinding “server”
client.go:436: [debug] Looks like there are no changes for RoleBinding “server-unprivileged”
client.go:436: [debug] Looks like there are no changes for RoleBinding “workspace”
client.go:436: [debug] Looks like there are no changes for RoleBinding “ws-manager-bridge”
client.go:436: [debug] Looks like there are no changes for RoleBinding “ws-manager”
client.go:436: [debug] Looks like there are no changes for RoleBinding “ws-manager-unpriviledged”
client.go:436: [debug] Looks like there are no changes for RoleBinding “ws-sync-rb”
client.go:436: [debug] Looks like there are no changes for Service “registry”
client.go:436: [debug] Looks like there are no changes for Service “dashboard”
client.go:436: [debug] Looks like there are no changes for Service “db”
client.go:436: [debug] Looks like there are no changes for Service “image-builder”
client.go:436: [debug] Looks like there are no changes for Service “messagebus”
client.go:436: [debug] Looks like there are no changes for Service “proxy”
client.go:436: [debug] Looks like there are no changes for Service “blobserve”
client.go:436: [debug] Looks like there are no changes for Service “registry-facade”
client.go:436: [debug] Looks like there are no changes for Service “server”
client.go:436: [debug] Looks like there are no changes for Service “theia-server”
client.go:436: [debug] Looks like there are no changes for Service “ws-manager”
client.go:436: [debug] Looks like there are no changes for Deployment “registry”
client.go:436: [debug] Looks like there are no changes for Deployment “dashboard”
client.go:436: [debug] Looks like there are no changes for Deployment “registry-facade”
client.go:436: [debug] Looks like there are no changes for Deployment “ws-manager-bridge”
client.go:436: [debug] Looks like there are no changes for Deployment “ws-scheduler”
client.go:254: [debug] Starting delete for “db-migrations” Job
client.go:283: [debug] jobs.batch “db-migrations” not found
client.go:108: [debug] creating 1 resource(s)
client.go:463: [debug] Watching for changes to Job db-migrations with timeout of 5m0s
client.go:491: [debug] Add/Modify event for db-migrations: ADDED
client.go:530: [debug] db-migrations: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:491: [debug] Add/Modify event for db-migrations: MODIFIED
client.go:530: [debug] db-migrations: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:491: [debug] Add/Modify event for db-migrations: MODIFIED
client.go:530: [debug] db-migrations: Jobs active: 1, jobs failed: 1, jobs succeeded: 0
client.go:254: [debug] Starting delete for “db-migrations” Job
upgrade.go:367: [debug] warning: Upgrade “gitpod” failed: post-upgrade hooks failed: timed out waiting for the condition
Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition
helm.go:94: [debug] post-upgrade hooks failed: timed out waiting for the condition
UPGRADE FAILED
main.newUpgradeCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:156
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:93
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
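
The trace shows the db-migrations hook job failing ("jobs failed: 1") and Helm deleting it right afterwards. One way to get more detail might be to tail the job's logs from a second terminal while the upgrade is still running, before Helm removes the failed job:

$ kubectl logs -f job/db-migrations

kubectl resolves the job to its pod, so this only works during the window in which the job still exists.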

What do you think? Should I open a new thread for this?

Best regards

@maxclac: Have you tried it again with a higher timeout?


@corneliusludmann yes, I did. Unfortunately, it doesn’t help.

Looking at the logs of the MySQL pod is very instructive:

$ kubectl logs pod/mysql-65c5b9f8f9-msbjc

2020-10-28 13:39:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.28-1debian9 started.
2020-10-28 13:39:00+00:00 [Note] [Entrypoint]: Switching to dedicated user ‘mysql’
2020-10-28 13:39:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.28-1debian9 started.
2020-10-28T13:39:00.618971Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-10-28T13:39:00.620296Z 0 [Note] mysqld (mysqld 5.7.28) starting as process 1 …
2020-10-28T13:39:00.623056Z 0 [Note] InnoDB: PUNCH HOLE support available
2020-10-28T13:39:00.623092Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-28T13:39:00.623096Z 0 [Note] InnoDB: Uses event mutexes
2020-10-28T13:39:00.623098Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-10-28T13:39:00.623100Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-10-28T13:39:00.623102Z 0 [Note] InnoDB: Using Linux native AIO
2020-10-28T13:39:00.623305Z 0 [Note] InnoDB: Number of pools: 1
2020-10-28T13:39:00.623839Z 0 [Note] InnoDB: Using CPU crc32 instructions
2020-10-28T13:39:00.625295Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-28T13:39:00.632384Z 0 [Note] InnoDB: Completed initialization of buffer pool
2020-10-28T13:39:00.634613Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-10-28T13:39:00.645977Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2020-10-28T13:39:00.651238Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-10-28T13:39:00.651421Z 0 [Note] InnoDB: Setting file ‘./ibtmp1’ size to 12 MB. Physically writing the file full; Please wait …
2020-10-28T13:39:00.667031Z 0 [Note] InnoDB: File ‘./ibtmp1’ size is now 12 MB.
2020-10-28T13:39:00.668240Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2020-10-28T13:39:00.668257Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2020-10-28T13:39:00.668703Z 0 [Note] InnoDB: Waiting for purge to start
2020-10-28T13:39:00.720177Z 0 [Note] InnoDB: 5.7.28 started; log sequence number 12444935
2020-10-28T13:39:00.722128Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2020-10-28T13:39:00.722327Z 0 [Note] Plugin ‘FEDERATED’ is disabled.
2020-10-28T13:39:00.724106Z 0 [Note] InnoDB: Buffer pool(s) load completed at 201028 13:39:00
2020-10-28T13:39:00.733306Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2020-10-28T13:39:00.733323Z 0 [Note] Skipping generation of SSL certificates as certificate files are present in data directory.
2020-10-28T13:39:00.736656Z 0 [Warning] CA certificate ca.pem is self signed.
2020-10-28T13:39:00.737103Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory.
2020-10-28T13:39:00.737568Z 0 [Note] Server hostname (bind-address): ‘*’; port: 3306
2020-10-28T13:39:00.738394Z 0 [Note] IPv6 is available.
2020-10-28T13:39:00.738413Z 0 [Note] - ‘::’ resolves to ‘::’;
2020-10-28T13:39:00.738597Z 0 [Note] Server socket created on IP: ‘::’.
2020-10-28T13:39:00.740003Z 0 [Warning] Insecure configuration for --pid-file: Location ‘/var/run/mysqld’ in the path is accessible to all OS users. Consider choosing a different directory.
2020-10-28T13:39:00.831690Z 0 [Note] Event Scheduler: Loaded 0 events
2020-10-28T13:39:00.831886Z 0 [Note] mysqld: ready for connections.
Version: ‘5.7.28’ socket: ‘/var/run/mysqld/mysqld.sock’ port: 3306 MySQL Community Server (GPL)
2020-10-28T13:39:13.981362Z 3 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:39:14.357477Z 4 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:14.708762Z 5 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:39:17.779167Z 6 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:39:18.814378Z 7 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:39:19.373465Z 8 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:24.383108Z 10 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:29.400174Z 11 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:30.387812Z 12 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:39:32.373347Z 15 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:39:34.417658Z 16 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:39.430572Z 17 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:44.439616Z 20 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:49.447523Z 21 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:54.451451Z 24 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:39:56.380597Z 25 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:39:58.374502Z 26 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:39:59.462717Z 27 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:04.467487Z 30 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:09.475051Z 31 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:14.478861Z 34 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:19.487351Z 35 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:24.495226Z 38 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:29.504307Z 39 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:34.509313Z 42 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:38.373973Z 43 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:40:39.383082Z 44 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:40:39.522665Z 45 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:44.528267Z 48 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:49.534087Z 49 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:54.538666Z 52 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:40:59.549729Z 53 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:04.556335Z 56 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:09.566326Z 57 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:14.569852Z 60 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:19.575475Z 61 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:24.579304Z 64 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:29.586636Z 65 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:34.594062Z 68 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:39.606637Z 69 [Note] Access denied for user ‘gitpod’@‘172.17.0.6’ (using password: YES)
2020-10-28T13:41:41.194456Z 70 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:41:46.210927Z 73 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:41:51.223995Z 74 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:41:56.236228Z 77 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:01.239153Z 78 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:01.333738Z 80 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)
2020-10-28T13:42:06.246445Z 82 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:06.386902Z 83 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:42:11.249920Z 84 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:16.255842Z 87 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:21.258384Z 88 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:26.267216Z 91 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:31.270636Z 92 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:36.275943Z 95 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:41.278585Z 97 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:46.286515Z 99 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:51.289204Z 101 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:42:56.293319Z 103 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:01.294778Z 105 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:06.302151Z 107 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:11.303785Z 109 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:16.309584Z 111 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:21.311277Z 113 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:26.318373Z 115 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:31.321165Z 117 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:36.330788Z 119 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:41.333632Z 121 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:46.342641Z 123 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:51.344786Z 125 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:43:56.353885Z 127 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:44:01.357047Z 129 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:44:06.395209Z 131 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:44:11.400904Z 133 [Note] Access denied for user ‘gitpod’@‘172.17.0.16’ (using password: YES)
2020-10-28T13:44:48.392844Z 141 [Note] Access denied for user ‘gitpod’@‘172.17.0.15’ (using password: YES)
2020-10-28T13:44:53.396449Z 144 [Note] Access denied for user ‘gitpod’@‘172.17.0.10’ (using password: YES)

So the gitpod user is denied access when connecting from certain hosts. How can I solve this?
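
Note that the Access denied entries come from several different pod IPs. A plausible cause is that the gitpod user was either created for one specific host only, or has a password that does not match db.password in values.yaml. A sketch of how one might check and fix the grants on the external MySQL (the password 'xxx' and the database name are placeholders; the canonical init scripts in the Gitpod repositories define the real ones):

mysql> SELECT user, host FROM mysql.user WHERE user = 'gitpod';
mysql> CREATE USER IF NOT EXISTS 'gitpod'@'%' IDENTIFIED BY 'xxx';
mysql> GRANT ALL PRIVILEGES ON `gitpod`.* TO 'gitpod'@'%';
mysql> FLUSH PRIVILEGES;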