Prebuild not prebuilding

Anyone know why this pre-builds fine:

tasks:
  - before: |
      printf "\n[settings]\napi_key = $WAKA_TIME_API_KEY\n" > ~/.wakatime.cfg
    prebuild: |
      docker-compose up -d
      yarn install
      yarn run database:migrate
    command: |
      docker-compose down
      docker network prune -f
      yarn run dev

But when I follow the documentation and replace the deprecated "prebuild" task with "init", it no longer pre-builds and instead runs on workspace start:

tasks:
  - before: |
      printf "\n[settings]\napi_key = $WAKA_TIME_API_KEY\n" > ~/.wakatime.cfg
    init: |
      docker-compose up -d
      yarn install
      yarn run database:migrate
    command: |
      docker-compose down
      docker network prune -f
      yarn run dev

Here’s the repo:

I’m probably doing something dumb so all advice welcome!

Hi @jmcelreavey!

We’re currently implementing a “prebuilds dashboard” page in Gitpod where you can check the status and the logs of your project’s prebuilds (and e.g. see why they failed / with which error message). Unfortunately, this isn’t quite finished yet, so we still need workarounds for now.

I looked at your project’s last 10 prebuilds in our database, and saw a lot of “headless task failed” messages. This means that your init task generates some error and exits with a non-zero return code.

I suggest you try to manually trigger a new prebuild for your project by using the special URL prefix (i.e. https://gitpod.io/#prebuild/https://github.com/jmcelreavey/dev-job).

I tried that, and the prebuild seemed to work fine, but then when the new workspace started, its third Terminal (called “Application”) failed with this error:

75a8d2dc6608: Pull complete
Digest: sha256:6e177005751b9055a5af897a57f8433a54b1c219c08a8433efa53cd4c7206ddd
Status: Downloaded newer image for mariadb:10.6
Creating dev-job_db_1      ... done
Creating dev-job_db-test_1 ... done
$ blitz prisma migrate dev
    Error: Cannot find module '@blitzjs/server'
    Require stack:
    - /workspace/dev-job/noop.js
    Code: MODULE_NOT_FOUND
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

Maybe that’s somehow related?

Also, I notice that you use the YAML multi-line scalar syntax (i.e. “pipe”) for commands:

    command: |
      docker-compose down
      docker network prune -f
      yarn run dev

I’d normally advise against this, because I believe this will still run yarn run dev even if one of the above commands fails. Instead, I generally prefer using the YAML multi-line folded syntax (i.e. “greater-than”):

    command: >
      docker-compose down &&
      docker network prune -f &&
      yarn run dev

But maybe this is just a matter of personal preference.
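
If you'd rather keep the pipe syntax (it does keep one command per line, which reads nicely), a sketch of an alternative is to make the shell fail fast with set -e as the first line, so any failing command aborts the whole task:

    command: |
      set -euo pipefail
      docker-compose down
      docker network prune -f
      yarn run dev

With set -e, the script exits as soon as any command returns a non-zero status, which behaves roughly like chaining the commands with &&.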


To recap, I think the best current way to troubleshoot a prebuild command is to:

  1. Commit your .gitpod.yml prebuild tasks, and push the commit to some branch (e.g. your project’s default branch, or some temporary test branch)
  2. Manually trigger a new prebuild for the branch you just pushed to (or simply for your project) by adding the URL prefix gitpod.io/#prebuild/ in front of the branch or project URL

Then, watch the logs while the prebuild is in progress, notice any errors and iterate.

This will become much more convenient in our upcoming Teams & Projects UI (ships in a few weeks), thanks to an overview of all project prebuilds (and their status/logs) and a side-by-side project configuration “wizard” (with a YAML editor on left and a build output on right, to allow faster iterations).


Thanks for looking into this for me, that’s a really detailed response and extremely helpful. I’ll move away from the pipe syntax; at least then, if it fails, it’ll fail a bit quicker.

Will go and make some tweaks now and see how I get on.

Thanks again!


Actually, although the prebuild does seem to run, once it’s finished I can see that the two terminal tabs that should have run during the prebuild are still running, which is why the “Application” task fails :frowning:

Is there perhaps a timeout on prebuild tasks? When I run the commands manually, they pass fine. It also seems to work when I change “init” to “prebuild”, but that approach is deprecated.

Finally got it working: I ended up moving it all to a Bash script, which worked perfectly for some reason… I was likely doing something silly, but it’s working now. I think I prefer Bash scripts anyway; at least I can test them manually without having to do a prebuild every time.
Thanks again for your assistance


Yes indeed (sorry that this information isn’t more visible) – prebuilds can run for up to one hour (60 min), after which they time out. I also noticed at least two timed-out prebuilds when looking up your prebuilds in Gitpod’s database, so you were definitely affected by this.

We’re considering extending the default prebuild timeout for projects that need a long time to bootstrap/build (e.g. as part of a new feature called “Incremental Prebuilds”, where you can have one potentially very long base prebuild, and then smaller incremental prebuilds that re-use the base prebuild and simply “refresh” your workspace for the latest commits) – but the longer base timeout unfortunately isn’t implemented yet.

Awesome that you got it working! :tada:

And that’s actually a valuable learning – I agree that Gitpod’s YAML tasks syntax is a bit awkward to work with, especially for the more complicated dev setups. I was trying to think about a simpler YAML syntax for tasks, but in fact, writing a Bash script and calling it from .gitpod.yml is probably the best / simplest / most direct solution to this problem. :100:
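
For reference, a minimal sketch of that approach, reusing the commands from the original init task (the script name init.sh is just an assumed name, not necessarily what was used here):

    #!/usr/bin/env bash
    # init.sh: runs during the prebuild; fail fast on any error
    set -euo pipefail
    docker-compose up -d
    yarn install
    yarn run database:migrate

And in .gitpod.yml:

    tasks:
      - init: bash ./init.sh

Besides being testable locally (just run bash ./init.sh in a workspace), this also gives you fail-fast set -e semantics without worrying about YAML scalar styles.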