Choose wisely between Pipelines, Stages, Jobs and Tasks

When modeling pipelines with Go, always remember that, subject to availability of agents:
  1. Multiple instances of a pipeline can run simultaneously. This is common if we have a pipeline that takes half an hour to complete and there are multiple commits to the pipeline's material during that interval.
  2. Stages of a pipeline-instance will execute in sequence.
  3. Jobs within a stage will be executed in parallel. If you have activities that must run in sequence, model them as tasks within a job, not as multiple jobs. Alternatively, you can distribute these tasks over multiple stages with one job each. Why? For one, it gives better visibility on the dashboard and pipeline activity page: stage progression is visually depicted, whereas task progression within a job is only depicted as an overall progress bar (because agents only report back to the server on job completion, not after each task). Perhaps more importantly, this gives you finer-grained re-run ability. It isn't possible to re-run individual tasks, only jobs. So modeling a sequence of activities as a number of single-job stages lets us pick and choose what to re-run, resulting in faster feedback at a micro level.
  4. Note, however, that refactoring a multi-task job into multiple jobs means the tasks may not all run on the same agent. A job is the unit of agent activity - so if you care about agent affinity (e.g. avoiding fetching materials on multiple agents), a single job is the way to go.
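
To make the trade-off in points 3 and 4 concrete, here is a sketch of the two shapes in Go's XML configuration. The stage, job, and make targets are illustrative, not prescriptive:

```xml
<!-- Option A: one job with sequential tasks.
     Both tasks run on the same agent, but can only be re-run together. -->
<stage name="build-and-test">
  <jobs>
    <job name="all-in-one">
      <tasks>
        <exec command="make"><arg>compile</arg></exec>
        <exec command="make"><arg>unit-test</arg></exec>
      </tasks>
    </job>
  </jobs>
</stage>

<!-- Option B: one single-job stage per activity.
     Progress is visible per stage, and each stage can be re-run on its own,
     but each job may be scheduled on a different agent. -->
<stage name="compile">
  <jobs>
    <job name="compile">
      <tasks><exec command="make"><arg>compile</arg></exec></tasks>
    </job>
  </jobs>
</stage>
<stage name="unit-test">
  <jobs>
    <job name="unit-test">
      <tasks><exec command="make"><arg>unit-test</arg></exec></tasks>
    </job>
  </jobs>
</stage>
```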

On the other hand, do make full use of parallelizability. Try to refactor pipelines with many stages into multiple pipelines. Try to partition independent compilation or testing activity into multiple jobs. Go is quite powerful and flexible - make sure you put it to work.
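
As a sketch of that partitioning, independent test suites can be modeled as separate jobs in one stage so Go can run them in parallel across available agents (the job names and make targets here are hypothetical):

```xml
<stage name="test">
  <jobs>
    <!-- These jobs are independent, so Go runs them in parallel,
         each on whichever agent is free. -->
    <job name="unit-tests">
      <tasks><exec command="make"><arg>unit-test</arg></exec></tasks>
    </job>
    <job name="integration-tests">
      <tasks><exec command="make"><arg>integration-test</arg></exec></tasks>
    </job>
    <job name="functional-tests">
      <tasks><exec command="make"><arg>functional-test</arg></exec></tasks>
    </job>
  </jobs>
</stage>
```

The stage still completes only when all three jobs finish, so downstream stages see a single pass/fail result.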