The Kubernetes-for-CI/CD landscape has changed a lot since the original Kubernetes Runtime RFC (#2) was written, and it makes sense for the Concourse team and community to discuss how we can best leverage K8s for Concourse workloads.
🚂
Regardless of how we choose to move forward, there are some common concerns we'll need to address to make the K8s runtime a reality (whether we use Tekton or roll our own solution leveraging K8s primitives).
tldr: Does the proposed K8s runtime schedule individual steps, builds (with many steps), or a whole pipeline at once?
What is the smallest unit of execution we can (and want to) leverage in the K8s runtime, so that we make optimal use of Kubernetes' features and address the technical challenges around streaming volumes between the steps of a build?
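As one purely illustrative point in the design space (not a decided design), a whole build could map onto a single Pod with one container per step, passing artifacts through a shared `emptyDir` volume; all names below are hypothetical:

```yaml
# Hypothetical mapping: one Pod per build, one container per step.
# Step containers hand off artifacts via a shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: build-1234            # illustrative name
spec:
  restartPolicy: Never
  volumes:
    - name: artifacts
      emptyDir: {}
  initContainers:
    - name: get-step          # stands in for a resource `get`
      image: busybox
      command: ["sh", "-c", "echo fetched > /artifacts/input"]
      volumeMounts:
        - name: artifacts
          mountPath: /artifacts
  containers:
    - name: task-step         # a task step consuming the fetched input
      image: busybox
      command: ["sh", "-c", "cat /artifacts/input"]
      volumeMounts:
        - name: artifacts
          mountPath: /artifacts
```

Scheduling per step (e.g. a Job per step) would instead mean moving artifacts across Pods, which is exactly where the volume-streaming question bites.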
tldr: How does the proposed runtime support existing task image behaviour in Concourse?
There are currently several ways users specify or fetch the image that defines the RootFS for a task step or custom resource. Users can provide an `image_resource`, which uses a resource `get` operation to fetch the image RootFS into a volume, but they can also use an output from another step in the pipeline as the `image:` for a task.
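To make that concrete, here are the two behaviours side by side as Concourse YAML (a sketch; `make-image` is a stand-in for whatever actually builds a valid image artifact):

```yaml
# 1) image_resource: the RootFS is fetched by a resource `get`.
platform: linux
image_resource:
  type: registry-image
  source: {repository: busybox}
run:
  path: echo
  args: ["hello"]
---
# 2) `image:` pointing at a previous step's output (pipeline snippet).
jobs:
  - name: build-and-use
    plan:
      - task: make-image        # stand-in: would need to emit rootfs/ + metadata.json
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: busybox}
          outputs: [{name: image}]
          run: {path: sh, args: ["-c", "echo 'build rootfs into ./image here'"]}
      - task: use-built-image
        image: image            # RootFS taken from the `image` output above
        config:
          platform: linux
          run: {path: echo, args: ["hello"]}
```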
tldr: Where does the existing behaviour of using Baggageclaim to cache volumes (resource caches, task caches, etc.) fit in, given that we cannot control a Volume's lifecycle outside of a pod/deployment?
Tekton's `PipelineRun` supports using a Persistent Volume Claim or a GCS bucket to share artifacts between tasks.
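For reference, a minimal sketch of that PVC-backed artifact sharing in Tekton terms (assuming a `Pipeline` named `build-pipeline` that declares a `shared-artifacts` workspace; the exact mechanism has varied across Tekton versions):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-run
spec:
  pipelineRef:
    name: build-pipeline
  workspaces:
    - name: shared-artifacts
      volumeClaimTemplate:      # Tekton provisions a PVC scoped to this run
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```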
Using K8s primitives, or some mix of K8s primitives and custom CRDs?
One big ol' Pod for a running build's steps? Jobs for build steps?
A separate Baggageclaim component deployed to support this? One Baggageclaim per node using anti-affinity? How do we mount volumes into the containers of a build step?
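For the per-node idea, a hedged sketch using pod anti-affinity (a DaemonSet would be the more idiomatic way to get one Pod per node); `concourse/baggageclaim` is a hypothetical image, and the paths and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baggageclaim
spec:
  replicas: 3                        # roughly: one per node
  selector:
    matchLabels: {app: baggageclaim}
  template:
    metadata:
      labels: {app: baggageclaim}
    spec:
      affinity:
        podAntiAffinity:             # never co-locate two replicas on a node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels: {app: baggageclaim}
              topologyKey: kubernetes.io/hostname
      containers:
        - name: baggageclaim
          image: concourse/baggageclaim   # hypothetical standalone image
          ports:
            - containerPort: 7788         # baggageclaim's default API port
          volumeMounts:
            - name: volumes
              mountPath: /volumes
      volumes:
        - name: volumes
          hostPath:                       # node-local storage for volumes
            path: /var/lib/baggageclaim
            type: DirectoryOrCreate
```

Build-step containers scheduled on the same node could then mount the same `hostPath`, but that reintroduces node-affinity constraints on scheduling.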
Kubernetes garbage collection (owner references, cascading deletion) may be relevant here: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/
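That garbage-collection mechanism is ownerReference-based, so one option would be to make each artifact PVC owned by its build's Pod and let K8s cascade-delete it (a sketch; the names and uid are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-1234-artifacts
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: build-1234
      uid: d9607e19-f88f-11e6-a518-42010a800195  # must match the live Pod's uid
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

This only ties a volume to a single in-cluster owner, though; Concourse caches are meant to outlive any one build, which is the tension raised above.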