Gitpod 0.9.0 Self Hosted w/ Private CA and Private Registry

I’ve been playing with Gitpod Self Hosted, trying to install it on my company’s internal/private Kubernetes cluster, with private CA certs, a private registry, and a private self-hosted GitLab instance. I’ll do my best to document all the steps I’ve taken along the way. I feel like I’m just a few steps away from getting this fully functional. Here’s the lay of the land as best as I can describe it:

  • I’m sitting on my company’s network, so imagine my domain is mycompany.com
  • We have a self hosted GitLab instance sitting at: gitlab.mycompany.com
  • We run a Rancher-managed Kubernetes cluster. TLS termination happens at a pair of Nginx load balancers that have wildcard certs. So our Rancher is at k8s.mycompany.com and we provide Layer 7 Ingresses into the cluster using the wildcard cert *.k8s.mycompany.com.
  • With the L7 Ingress described above, I get to my Gitpod instance at gitpod.k8s.mycompany.com
  • We run a private GoHarbor registry at harbor.mycompany.com
    • Using GoHarbor’s Proxy Cache feature, I have a DockerHub Proxy at harbor.mycompany.com/dockerhub
    • I’ve created a project in Harbor called gitpod, which is available at harbor.mycompany.com/gitpod. I also created a robot account and created a Kubernetes secret with the robot account’s login information (a quick sketch of that secret follows this list).
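
For reference, the kind of pull secret I’m describing can be created with kubectl. The robot account name and namespace below are placeholders, but the secret name matches what I reference later in my values.yaml:

kubectl create secret docker-registry image-builder-registry-secret \
  --docker-server=harbor.mycompany.com \
  --docker-username='<harbor-robot-account>' \
  --docker-password='<robot-token>' \
  --namespace=<gitpod-namespace>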

I used a ton of information from [Self-Hosted] Support for own CA certificates · Issue #2615 · gitpod-io/gitpod · GitHub, especially all of @stefanstoeckigt’s great yaml snippets, as well as @jgallucci32’s PR here: Support custom CA certificates in Helm by jgallucci32 · Pull Request #2984 · gitpod-io/gitpod · GitHub. I can confirm that following the breadcrumbs there to patch the 0.9.0 chart gets you most of the way to a functioning Gitpod deployment. The last hang-up there was mounting my custom CA certificate in the service sidecar of the image-builder pod at /etc/docker/certs.d/…
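
In case it helps anyone else, this is roughly the shape of that mount, assuming the chart’s imageBuilder volumes/volumeMounts end up on the sidecar as well. Docker’s convention is /etc/docker/certs.d/<registry-host>/ca.crt, so the harbor.mycompany.com path below is just my assumption for where the cert needs to land:

components:
  imageBuilder:
    volumes:
    - name: root-ca
      secret:
        secretName: root-ca
    volumeMounts:
    # assumption: the sidecar's Docker daemon picks up per-registry CAs from certs.d
    - mountPath: /etc/docker/certs.d/harbor.mycompany.com/ca.crt
      name: root-ca
      subPath: root-ca.pem
      readOnly: true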

My last hurdle is to actually get a workspace to start. Initially, I was getting the following error:

Request createWorkspace failed with message: 13 INTERNAL: cannot resolve workspace image: Error response from daemon: unknown: repository gitpod/workspace-images not found

Unknown Error: { "code": -32603 }

which I commented about here: Gitpod 0.9.0 chart Self hosting - Anyone succeeded

I’ve been playing with the yaml and following the breadcrumbs from Allow air-gap Gitpod installations by corneliusludmann · Pull Request #3228 · gitpod-io/gitpod · GitHub but I have yet to intuit the right set of settings for the workspace images.

Here’s the values.yaml that I am using. Keep in mind that I helm fetch'd the chart, manually applied the changes from the PR above, repackaged the chart, and then used helm template to build the manifests, which I deploy with kubectl apply. Installing via helm install currently fails for me, but that’s a separate issue: Longhorn won’t let me deselect it as the default storage class, and with two storage classes marked as default, Helm can’t figure out which one to use. Again, separate issue; I’m just using kubectl to work around it atm.
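
In other words, the workflow is roughly this (the chart repo alias and directory names are just how I happen to have things set up):

# fetch and unpack the 0.9.0 chart
helm fetch gitpod.io/gitpod --version 0.9.0 --untar
# hand-apply the changes from the custom CA PR to the unpacked chart here
# (optionally `helm package ./gitpod` to repackage it afterwards)
# render the manifests with my values and apply them directly
helm template gitpod ./gitpod -f values.yaml > gitpod.yaml
kubectl apply -f gitpod.yaml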

hostname: gitpod.k8s.mycompany.com

certificatesSecret:
  secretName: https-certificates

caBundleSecretName: root-ca
 
persistence:
  storageClass: storage-class-name

rabbitmq:
  auth:
    username: "stuff"
    password: "stuff"

minio:
  accessKey: stuff
  secretKey: stuff

components:
  wsDaemon:
    containerRuntime:
      nodeRoots:
      - /var/lib
      - /run/containerd/io.containerd.runtime.v1.linux/moby
    registryProxyPort: 8082
    volumes:
    - name: root-ca
      secret:
        defaultMode: 420
        secretName: root-ca
    volumeMounts:
    - mountPath: /etc/ssl/certs/ca-certificates.crt
      name: root-ca
      subPath: root-ca.pem
      readOnly: false

  wsManager:
    volumes:
    - name: root-ca
      secret:
        defaultMode: 420
        secretName: root-ca
    volumeMounts:
    - mountPath: /etc/ssl/certs/root-ca.pem
      name: root-ca
      subPath: root-ca.pem
      readOnly: false

  imageBuilder:
    registryCerts: []
    registry: 
      name: harbor.mycompany.com/dockerhub/gitpod 
      secretName: image-builder-registry-secret
    dindImage: harbor.mycompany.com/dockerhub/library/docker:19.03-dind
    alpineImage: harbor.mycompany.com/myproject/alpine-private-ca:3.13
    selfBuildBaseImage: harbor.mycompany.com/myproject/selfbuild:latest
    volumes:
    - name: root-ca
      secret:
        defaultMode: 420
        secretName: root-ca
    volumeMounts:
    - mountPath: /etc/ssl/certs/root-ca.pem
      name: root-ca
      subPath: root-ca.pem
      readOnly: false

  workspace:
    defaultImage:
      imagePrefix: "harbor.mycompany.com/dockerhub/gitpod/"
      imageName: "workspace-full"
    pullSecret:
      secretName: image-builder-registry-secret
    template:
      spec:
        containers:
          - name: workspace
            volumeMounts:
            - mountPath: /etc/ssl/certs/root-ca.pem
              name: root-ca
              subPath: root-ca.pem
              readOnly: false
        volumes:
        - name: root-ca
          secret:
            defaultMode: 420
            secretName: root-ca

  server:
    serverContainer:
      env:
      - name: NODE_TLS_REJECT_UNAUTHORIZED
        value: "0"

docker-registry: 
  enabled: false

authProviders:
  - id: "GitLab"
    host: "gitlab.mycompany.com"
    type: "GitLab"
    oauth:
      clientId: "stuff"
      clientSecret: "stuff"
      callBackUrl: "https://gitpod.k8s.mycompany.com/auth/gitlab/callback"
      settingsUrl: "https://gitlab.mycompany.com/profile/applications"
    description: ""
    icon: ""

With the YAML above, I get the error:

Request startWorkspace failed with message: 7 PERMISSION_DENIED: cannot resolve workspace image: not authorized

From the image-builder service sidecar logs, it looks like it’s trying to push image-builder/selfbuild to docker.io:

{"level":"debug","message":"Successfully tagged gitpod.io/image-builder/selfbuild:680cfc360bdfb55598ec972883eefe1b6bc5abe86b2359d1ebf9bdb235e75235","severity":"DEBUG","time":"2021-06-23T14:50:43Z"}
...
{"a":{"All":false,"Explicit":null},"level":"debug","message":"registry not allowed","ref":{},"reg":"docker.io","serviceContext":{"service":"image-builder","version":""},"severity":"DEBUG","time":"2021-06-23T14:50:51Z"}

Anyone have any ideas what I should try next?
Thanks!
-b

Noticed the comments from @corneliusludmann here: Request createWorkspace failed unknown: artifact

Added the:

components:
  server:
    defaultBaseImageRegistryWhitelist:
      - "harbor.mycompany.com"

bit to my yaml, but still no dice.

I am also interested in that. I have the feeling Gitpod may not be to blame; Harbor feels weird with other tools like GitLab (did you manage to actually use Harbor as the GitLab registry?). What I would suggest is trying Gitpod with the docker-registry subchart enabled with whatever you need, and then a replication rule between Harbor and that registry. You need twice the storage, but in theory you can keep Harbor up to date, scan what you need, etc… You can also switch the replication rule from pull to push if you reinstall Gitpod, to repopulate the registry. You would need a shared registry secret.
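
Just to make that concrete, the subchart side would look something like this in values.yaml (the persistence values are placeholders; the replication rule itself is configured in the Harbor UI, and both sides would share the same registry credentials/secret):

docker-registry:
  enabled: true
  persistence:
    enabled: true
    size: 50Gi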

Hi @cyrilcros!
I started out using the Gitpod built-in registry, but had issues related to the private CA certificates. I’ll try that again and see if I’ve resolved that or not.

I think the problem is the image names/paths though. I haven’t found any documentation yet that explains them well. I’ve mostly guessed the path that I think Gitpod wants to push images to. I’ve successfully configured it to pull images through the Harbor proxy cache for docker.io. I believe it’s still pulling the core Gitpod images from gcr.io.

The error above seems more like it is attempting to push the image to docker.io, but that’s not an “allowed” registry. I also tried to add docker.io to the whitelist, but got the same error. I can troll the Harbor logs, but as best I can tell it’s not even connecting to Harbor at the stage where it fails.

Perhaps what I need to try is to fully follow the air-gap instructions, pull all the images in, and push them into my Harbor. But again, I can’t find documentation on which image/path is for what. The mirror script from the air-gap instructions looks like it pushes everything to the root project/namespace of the internal registry. I just haven’t figured out how to map that to the Harbor project/repository parlance.
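
If I go down that road, my rough plan for mapping the flat mirror layout onto a Harbor project would be something like this, per image (the source path and image name here are placeholders, not the actual list from the mirror script):

SRC_IMAGE=gcr.io/<gitpod-source-repo>/ws-manager:0.9.0      # whatever the mirror script references
DEST_IMAGE=harbor.mycompany.com/gitpod/ws-manager:0.9.0     # prefixed with my Harbor project
docker pull "$SRC_IMAGE"
docker tag "$SRC_IMAGE" "$DEST_IMAGE"
docker push "$DEST_IMAGE"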

-b

@bmnave, how did you manage to get Gitpod to use your Harbor Proxy Cache? Is it something in the Gitpod chart or a cluster-level setting (Docker or containerd’s config files)?
Thanks!

It’s the imageBuilder.registry.name setting. Basically you configure a proxy cache project in Harbor; in my example it’s harbor.mycompany.com/dockerhub. If you “docker pull harbor.mycompany.com/dockerhub/project/image:tag”, Harbor will serve project/image:tag if it already has it; if not, it will pull it from Docker Hub, serve it to the client, and save a copy.
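
For example (library/alpine here is just an arbitrary image to illustrate the pull-through behaviour):

# first pull: Harbor fetches library/alpine:3.13 from Docker Hub, serves it, and caches a copy
docker pull harbor.mycompany.com/dockerhub/library/alpine:3.13
# later pulls of the same tag are served straight from Harbor's cached copy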

So I’m trying to use that setting to get the Gitpod images that come from Docker Hub, but I think it’s also using that path as the push destination for the images it builds. Maybe? I feel like I have the settings flipped somewhere.

-b

Thanks. The way I think things work (and I am no expert here) is that the image-builder fetches images from docker.io or gcr.io and then pushes them to the private registry listed in the chart.
You cannot push anything to a proxy-cache in Docker or Harbor…
I know you can set up per-node registry mirrors for containerd or for Docker, but I don’t know if Gitpod will respect those.

Yep, totally agree. I think I started with imageBuilder.registry.name set to harbor.mycompany.com/gitpod (a regular project, not a proxy cache), thinking it would pull the image from Docker Hub and push to Harbor. But then I got the error that it couldn’t find the workspace image, which is how I got to the config above. But it feels reversed in some way. I feel like workspace.defaultImage should be used for the pull, but that didn’t seem to work.
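
For what it’s worth, my current reading of those two settings (which may well be wrong, given the behaviour above) is roughly this:

components:
  imageBuilder:
    registry:
      # where image-builder pushes (and later pulls) the workspace images it builds
      name: harbor.mycompany.com/gitpod
      secretName: image-builder-registry-secret
  workspace:
    defaultImage:
      # the default base image used when a repo doesn't specify one; this is only
      # ever pulled, so it can point at the Harbor proxy cache
      imagePrefix: "harbor.mycompany.com/dockerhub/gitpod/"
      imageName: "workspace-full"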

I confirmed that using the bundled registry doesn’t work with private CA certs, at least for testing; I don’t have certificates with all the wildcards needed, and the workspace fails to start up due to a certificate error.

I’m currently stuck on this error which comes out of the image-builder/dind logs:

level=error msg="Handler for GET /v1.40/distribution/harbor.mycompany.com/gitpod/workspace-images:9601c638b1a5a99455976258df8426a8a19786e697b88d5f8b3185d06ff1fb8d/json returned error: unknown: repository gitpod/workspace-images not found"

My imageBuilder yaml looks like:

  imageBuilder:
    registryCerts: []
    registry: 
      name: harbor.mycompany.com/gitpod
      secretName: image-builder-registry-secret
      path: ""
    workspaceImageName: workspace-full
    volumes:
    - name: root-ca
      secret:
        defaultMode: 420
        secretName: root-ca
    volumeMounts:
    - mountPath: /etc/ssl/certs/root-ca.pem
      name: root-ca
      subPath: root-ca.pem
      readOnly: false

I can’t seem to find an image on Docker Hub or gcr.io called “workspace-images”. What am I missing here?

Thanks!

I’ve tried to push the workspace-full image into various repositories (like harbor.mycompany.com/gitpod/workspace-images/workspace-full:latest and harbor.mycompany.com/gitpod/gitpod/workspace-images/workspace-full:latest), trying to find the repository that the code is looking for, but I still end up with the error:

Request startWorkspace failed with message: 13 INTERNAL: cannot resolve workspace image: Error response from daemon: unknown: repository gitpod/workspace-images not found

I found the following error in the server log:

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","serviceContext":{"service":"server","version":"0.9.0"},"stack_trace":"Error: 13 INTERNAL: cannot resolve workspace image: Error response from daemon: unknown: repository gitpod/workspace-images not found\n    at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:176:52)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)\n    at /app/node_modules/@grpc/grpc-js/build/src/call-stream.js:145:78\n    at processTicksAndRejections (internal/process/task_queues.js:79:11)","component":"server","severity":"ERROR","time":"2021-07-07T20:35:27.800Z","environment":"production","region":"local","message":"Request startWorkspace failed with internal server error","error":"Error: 13 INTERNAL: cannot resolve workspace image: Error response from daemon: unknown: repository gitpod/workspace-images not found\n    at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:176:52)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)\n    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)\n    at /app/node_modules/@grpc/grpc-js/build/src/call-stream.js:145:78\n    at processTicksAndRejections (internal/process/task_queues.js:79:11)","payload":{"method":"startWorkspace","args":["cyan-snipe-811mxip7",{"forceDefaultImage":false},{"_isCancelled":false}]}}

So I guess I’ll go trolling in the Node.js code to try to figure out what repository it is talking about, unless someone has a better idea?

Thanks!
-b

My issue seems awfully close to the issue described here: Custom registry: Problem creating workspace

Unfortunately, the yaml in the included Solution doesn’t seem to be valid anymore. It uses what I think is a now-defunct gcr.io repository (gitpod-core-dev/build/), and I don’t see the imagePrefix or imageName options having any effect on my deployment.

Circling back to this again, I looked in my GoHarbor Registry logs and found the following during the workspace creation:

proxy[1481]: 192.168.1.2 - "GET /service/token?account=robotgitpod%2Bgitpod&scope=repository%3Agitpod%2Fworkspace-images%3Apull&service=harbor-registry HTTP/1.1" 200 979 "-" "docker/19.03.15 go/go1.13.15 git-commit/99e3ed8 kernel/4.15.0-147-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)" 0.043 0.044 .
proxy[1481]: 192.168.1.2 - "HEAD /v2/gitpod/workspace-images/manifests/9601c638b1a5a99455976258df8426a8a19786e697b88d5f8b3185d06ff1fb8d HTTP/1.1" 404 0 "-" "docker/19.03.15 go/go1.13.15 git-commit/99e3ed8 kernel/4.15.0-147-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)" 0.007 0.008 .
proxy[1481]: 192.168.1.2 - "GET /v2/gitpod/workspace-images/manifests/9601c638b1a5a99455976258df8426a8a19786e697b88d5f8b3185d06ff1fb8d HTTP/1.1" 404 91 "-" "docker/19.03.15 go/go1.13.15 git-commit/99e3ed8 kernel/4.15.0-147-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)" 0.007 0.008 .

I’m not sure if this is actually a Gitpod issue or maybe a Docker issue, but in the GoHarbor v2.3.0 release notes (Release v2.3.0 · goharbor/harbor · GitHub) I noticed this part:

Breaking Changes

    The API to GET artifact under public project such as GET /v2/$public_project/$repo/manifests/$tag, will receive a 401 if the request does not carry "Authorization" header, more details see:
    #14711
    #14768

It seems like what’s happening is that Gitpod tries to get the manifest for the gitpod/workspace-images repository, and Harbor returns a 404, even though the login with the robot token is working (the HTTP 200 on the token request).

So I finally figured out a workaround (hack) for the failure to find the workspace image. I did the following:

docker pull gitpod/workspace-full
docker tag gitpod/workspace-full harbor.mycompany.com/gitpod/workspace-images:9601c638b1a5a99455976258df8426a8a19786e697b88d5f8b3185d06ff1fb8d
docker push harbor.mycompany.com/gitpod/workspace-images:9601c638b1a5a99455976258df8426a8a19786e697b88d5f8b3185d06ff1fb8d

And I was able to successfully get past the previous error. Gitpod goes through a “Preparing workspace” page. Unfortunately, the screen keeps reloading/flashing and I can’t read the error that it’s getting. When it finally fails, I see the following in the browser console log:

Request startWorkspace failed with internal server error Error: Connection got disposed.
Error: Request startWorkspace failed with message: Connection got disposed.

Any ideas?
Thanks!
-b