How to Create Tekton CI CD Pipelines With Kubernetes?

Last Modified: October 19th, 2023


Cloud-based software development has revolutionized how applications are built and deployed. Businesses can now use server environments that are easier to start, faster to scale, and cheaper than physical hardware. This shift has also brought automation to application deployment: software engineering teams rely on automated processes to deliver their applications, a practice usually referred to as Continuous Integration & Continuous Delivery (CI/CD). Faster delivery and deployment bring quicker feedback, which is a central goal of agile methodologies.

Tekton is a powerful yet flexible cloud-native tool for building and running Tekton CI CD Pipelines on Kubernetes. Its main goal is to provide a standard, cloud-native set of building blocks for CI CD systems and to make building, testing, and packaging your source code easier and faster. One of the greatest advantages of Tekton is that it is based on Kubernetes, so it works with any platform or language and runs on any cloud.

This guide explains the basics of Continuous Integration (CI) and Continuous Delivery (CD) systems. You will learn about the core concepts used in Tekton: its terminology, components, and pipeline architecture. By the end of this tutorial, you will have a good understanding of Tekton Tasks and Pipelines and be able to build your own CI CD pipelines with Tekton.


What Is Tekton?

Tekton is a Kubernetes-native open source framework for creating Continuous Integration and Continuous Delivery (CI/CD) systems. It is highly optimized for building, testing, and deploying applications across multiple cloud providers or hybrid environments. 

Tekton is simple to use since it abstracts away the complex Kubernetes concepts and implementation details. It comes with a number of Kubernetes custom resources to declare and use Tekton CI CD Pipelines. We’ll cover more on this in the upcoming sections.

One of the greatest benefits of using Tekton is that Tekton standardizes Continuous Delivery Tooling and Processes across many vendors, languages, and deployment environments. It works well with Jenkins, Jenkins X, Skaffold, Knative, GitHub Actions, Argo CD and many other popular Continuous Delivery Tools. This helps your teams to create pipelines and standardize software releases with ease.

Tekton is simple to use, and its building blocks can be reused to suit your team’s workflow. It provides scalable, serverless, cloud-native execution out of the box, so there’s less preparation time and more action time for your teams. You can find more information in the Tekton GitHub repository: Tekton CD GitHub.

What Is Continuous Integration and Continuous Delivery/Deployment?

Continuous Integration (CI)

Software development teams spend huge portions of their time developing and improving their applications. For the most part, these applications remain in an unusable state until the whole development effort is complete. Some long-running projects defer integration to lengthy, late-stage integration phases, which in turn lengthens the time to deployment.

Continuous Integration (CI) is an “all-time” integration process in which every software change is automatically built and tested as soon as it is committed. The goal is to keep the software in a working, releasable state at all times. Any bug or malfunction in your application can be traced back easily and fixed immediately, reducing the overall time needed to validate and release new software updates.

Application development with Continuous Integration happens through smaller commits and smaller code changes. These commits occur at regular intervals, for example at least once a day. Before pushing a change to the build server, developers merge the latest changes into their local copy and verify that the build and tests still pass, so the change doesn’t break the build and the mainline stays intact.

Continuous Delivery (CD)

Continuous Delivery (CD) is the practice of automating software releases to selected environments such as production, development, and testing. It builds on Continuous Integration to automate software delivery and ensure that the software is always ready for production, even if the team decides not to release it immediately.

Continuous Delivery checks your software using automated tests and quality checks, to ensure you always have a production-ready environment for your applications. You can either automate this process completely or decide to partially automate it, with manual steps at critical points. This practice offers several advantages like reduced deployment risk, lower costs, and faster feedback.

To understand how CI and CD fit together in the software development cycle, let’s look at how they combine into a single pipeline.

Continuous Integration (CI)/Continuous Delivery (CD) Pipeline

A Continuous Integration (CI)/Continuous Delivery (CD) Pipeline, or CI/CD Pipeline, is a series of actionable steps to deliver a new software product or release. It consists of stages that compile code, run tests, and analyze the code. Using CI/CD Pipelines, your teams can easily and safely deploy a new version of the application and ensure quality code delivery.

A characteristic feature of CI/CD Pipelines is automation: they perform quality control and assess everything from performance to API usage and security. This minimizes human error and ensures a consistent procedure throughout the software development process.

Tekton CI CD Pipelines are of immense utility to software developers and DevOps engineers: they keep infrastructure definitions under version control and make rollbacks easier when a release goes wrong. They also support advanced deployment patterns such as rolling, blue/green, and canary deployments, as well as GitOps workflows, to help you deploy, monitor, and validate your applications in different ways.

In the next section, we outline the benefits of Tekton CI CD Pipelines.

Benefits of Tekton CI CD Pipelines

  • Increased Speed: Using Tekton CI CD Pipelines, your software development teams can reduce time to market from weeks or months to days or hours. With the automation options on offer, developers can both speed up product releases and improve code quality. Small commits and incremental changes allow for more agility and competitiveness.
  • Customizability: Tekton CI CD Pipelines are customizable. They provide a high degree of flexibility for platform engineers and developers who can reengineer the basic building blocks to suit their needs.  
  • Scalability: Tekton lets you add nodes to your cluster as your workload grows. You don’t need to redefine your resource allocations or make any other modifications to scale your Tekton CI CD Pipelines.
  • Expandable: Running short on time? Use pre-made components from the Tekton Catalog to quickly create and modify Tekton CI CD Pipelines.
  • Strong Development Team: Every member of your team can examine, improve and verify the various stages of the Tekton CI CD Pipeline. This promotes strong teamwork and better communication across your entire organization.
  • Standardized: Tekton uses Kubernetes to run your workloads, which is a well-established orchestration system.
  • Less Work in Progress: With rapid feedback loops in place, you and your teams can monitor development progress right from the start. Faster feedback helps your teams take timely action and make product improvements quickly.

Simplify ETL Using Hevo’s No-Code Data Pipeline

Hevo Data, a Fully-managed Data Pipeline platform, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo’s wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 100+ Data Sources straight into Data Warehouses, or any Databases. To further streamline and prepare your data for analysis, you can process and enrich raw granular data using Hevo’s robust & built-in Transformation Layer without writing a single line of code!


Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full access free trial today to experience an entirely automated hassle-free Data Replication!

Tekton Terminology

Tekton uses a number of Kubernetes Custom Resource Definitions (CRD) to create and run Tekton CI CD Pipelines. Given here is a list of all the technical terms and their explanations in the realm of Tekton CI CD Pipelines.

  • Task: A Task is a sequence of steps that, for example, compile code, run tests, or build and deploy images.
  • Pipeline: A Pipeline is an ordered collection of Tasks. A Tekton CI CD Pipeline collects the Tasks, links them using a Directed Acyclic Graph (DAG), and executes them in the order the graph defines. Each Task runs in its own Kubernetes Pod, and the Pipeline ensures each of these Pods completes successfully.
  • TaskRun: A TaskRun defines a specific execution of a Task.
  • PipelineRun: A PipelineRun specifies a single run of your Tekton CI CD Pipeline. Using PipelineRun you can view the status of your Tekton CI CD Pipelines, including the specifics of each task execution.
  • PipelineResource: Tekton CI CD Pipelines have resources associated with them, called PipelineResources. These can be input assets (e.g. a Git repo) or output assets (e.g. an image registry) of a Pipeline.
(Figure: Tekton CI CD Pipeline architecture. Image source: Developer IBM)

Tasks are handy for smaller workloads like running a test, linting, or warming a Kaniko cache. A single Task runs in a single Kubernetes Pod, uses a single disk, and keeps things simple in general.

Pipelines are better suited to complicated workloads such as static analysis, as well as testing, building, and deploying large applications.

Tekton Components

Before we jump into the steps to create a Tekton CI CD Pipeline, let’s first discuss the different components of Tekton and the role each plays in building and executing a Tekton CI CD Pipeline.

  • Pipelines: Pipelines define a set of Kubernetes Custom Resources required to build a Tekton CI CD Pipeline. 
  • Triggers: Triggers provide an automatic way to execute Tekton Pipelines at the right time. They instantiate Tekton CI CD Pipelines based on the completion of certain events.
  • CLI: The Tekton Command Line Interface (tkn) allows you to interact with Tekton. It is built on top of the Kubernetes CLI; a few common commands are sketched just after this list.
  • Dashboard: Dashboard is a web-based Graphical User Interface (GUI) for Tekton Pipelines that provides information regarding pipeline execution.  
  • Catalog: Tekton catalog is a collection of high-quality, community-contributed Tekton building elements such as Tasks and Pipelines that can be readily used in your Tekton Pipelines. 
  • Hub: Tekton hub is a web-based graphical interface that allows you to access the Tekton catalog.
  • Operator: The Tekton Operator is a Kubernetes operator that allows you to install, update, and remove Tekton projects from your Kubernetes cluster.
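
As a quick illustration, here are a few everyday tkn commands you might use once Tekton is installed (a minimal sketch; the resources listed are simply whatever exists in your cluster):

# List the Pipelines, Tasks, and runs in the current namespace
tkn pipeline list
tkn task list
tkn pipelinerun list

# Follow the logs of the most recent PipelineRun
tkn pipelinerun logs --last -f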

Pre-requisites

Creating and using Tekton CI CD Pipelines requires a running Kubernetes cluster, version 1.21 or above. During installation and setup, you may need cluster-admin permissions for the current user. You can grant them with the following command (it looks up your account with gcloud, so it assumes a Google Kubernetes Engine setup):

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

Three flavors of Tekton Pipelines are available: the latest official release, a specific previous release, or the nightly build. Choose the one you would like to install:

To install the latest official release of Tekton Pipelines and its dependencies on an existing Kubernetes cluster, run the kubectl apply command:

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

(You can install previous versions by including “/previous/$VERSION_NUMBER”, e.g. https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.2.0/release.yaml.)

For the nightly release, you can execute:

kubectl apply --filename https://storage.googleapis.com/tekton-releases-nightly/pipeline/latest/release.yaml

You can monitor the Tekton Pipelines components until all of them show a Running status. Here’s the command to do so:

kubectl get pods --namespace tekton-pipelines

Tip: Instead of repeating the kubectl get command, append the --watch flag to see the components’ status updates in real time. To leave watch mode, press CTRL + C.
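
For example:

kubectl get pods --namespace tekton-pipelines --watch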

You are now ready to create and run Tekton Pipelines. For more information about installing Tekton Pipelines on OpenShift, you can visit the following website – Installing Tekton Pipelines.

What makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Sign up here for a 14-day free trial!

Steps to Create Tekton CI CD Pipeline

Tekton Pipelines are defined using YAML code. If you are already familiar with Kubernetes or containers, then creating Tekton CI CD Pipelines using YAML would be an easy job. 

Tekton Tasks 

To define and run your Tekton Pipelines, you first need to define your Tasks. Here we walk through a simple example to get you started. As mentioned earlier, Tasks are the basic building blocks of Tekton Pipelines: each one represents a sequence of steps performed in order, and each step runs as its own container inside the Task’s Pod, which keeps the steps isolated and flexible.

Here we create a simple task to add two numbers:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: add-task
spec:
  params:
    - name: first
      description: the first operand
    - name: second
      description: the second operand
  results:
    - name: sum
      description: the sum of the first and second operand
  steps:
    - name: add
      image: alpine
      env:
        - name: OP1
          value: $(params.first)
        - name: OP2
          value: $(params.second)
      command: ["/bin/sh", "-c"]
      args:
        - echo -n $((${OP1}+${OP2})) | tee $(results.sum.path);

The name of the Task is specified with “name: add-task”, and “kind:” determines the resource type, which here is a Tekton Task. A Task may contain several steps, defined under “steps:”. In this example, the step echoes the sum of the two operands and writes it to $(results.sum.path), which exposes the value as a Task result that later Tasks can consume.
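
To execute this Task on its own, you would create a TaskRun that references it and supplies values for its parameters. Here is a minimal sketch (the run name add-task-run is just an illustrative choice):

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: add-task-run
spec:
  taskRef:
    name: add-task     # the Task defined above
  params:
    - name: first
      value: "6"
    - name: second
      value: "7"

After applying it with kubectl apply -f, you can inspect the outcome with tkn taskrun describe add-task-run.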

Here’s another example that clones a Git repository. Instead of writing the clone logic yourself, you reference the ready-made git-clone ClusterTask via taskRef. Note that this snippet is not a standalone Task definition; it is a task entry that belongs under the tasks: section of a Pipeline:

  - name: git-clone
    params:
      - name: url
        value: 'https://github.com/piomin/sample-spring-kotlin-microservice.git'
      - name: revision
        value: master
    taskRef:
      kind: ClusterTask
      name: git-clone
    workspaces:
      - name: output
        workspace: source-dir

You can also add Workspaces to mount and share filesystems across Tasks; this is loosely analogous to how steps in a GitHub Actions job share the runner’s filesystem. The shared filesystem typically holds a cloned GitHub repository, a ConfigMap, or a Secret. More information about Tekton Workspaces can be found in the Tekton documentation.

In this example, we create a file called susanmessage.txt in a workspace named susan-task-workspace.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: first-workspace-task
spec:
  workspaces:
    - name: susan-task-workspace
      description: Workspace for sharing files between tasks
  steps:
    - name: start-step
      image: alpine:3.12.1
      securityContext:
        runAsUser: 0
      command: ["/bin/sh"]
      args:
        - -c
        - |
            echo "Starting initial task"
            echo "Good Morning Susan! Welcome aboard." > $(workspaces.task-ws.path)/susanmessage.txt
            ls -la $(workspaces.task-ws.path)
            echo "Ending initial task"

Tekton Pipelines

Now that you’ve defined your Tekton Task(s), it’s time to write your first Tekton Pipeline. Tasks can do some powerful stuff on their own, but they are still somewhat limited. What happens if you want to perform more than one operation? This is where Tekton Pipelines come into play.

Tekton Pipelines are collections of Tasks designed to produce an output from your inputs. Extending our earlier example of summing two numbers, we now create a Tekton Pipeline that adds three numbers.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: add-three-numbers
spec:
  params:
    - name: first
      description: the first operand
    - name: second
      description: the second operand
    - name: third
      description: the third operand
  tasks:
    - name: first-add
      taskRef:
        name: add-task
      params:
        - name: first
          value: $(params.first)
        - name: second
          value: $(params.second)
    - name: second-add
      taskRef:
        name: add-task
      params:
        - name: first
          value: $(tasks.first-add.results.sum)
        - name: second
          value: $(params.third)
  results:
    - name: sum
      description: the sum of all three operands
      value: $(tasks.second-add.results.sum)
    - name: partial-sum
      description: the sum of first two operands
      value: $(tasks.first-add.results.sum)
    - name: all-sum
      description: the two partial results joined with a hyphen (for illustration)
      value: $(tasks.second-add.results.sum)-$(tasks.first-add.results.sum)

Parameters, or “params”, are central to building Tekton CI CD Pipelines. Each parameter has a name and, optionally, a type, a description, and a default value.
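
For instance, a fully specified parameter declaration might look like this (the parameter name git-revision is just an example):

  params:
    - name: git-revision
      type: string
      description: the Git revision to check out
      default: main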

Here’s another Tekton Pipeline that gets the application’s source code from GitHub and then builds and deploys it on OpenShift.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: github-to-openshift-build-and-deploy
spec:
  workspaces:
  - name: common-workspace
  params:
  - name: deployment-name
    type: string
    description: name of the deployment to be patched
  - name: git-url
    type: string
    description: url of the git repo for the code of deployment
  - name: git-revision
    type: string
    description: revision to be used from repo of the code for deployment
    default: pipelines-1.8
  - name: IMAGE
    type: string
    description: image to be built from the code
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: common-workspace
    params:
    - name: url
      value: $(params.git-url)
    - name: subdirectory
      value: ""
    - name: deleteExisting
      value: "true"
    - name: revision
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    params:
    - name: IMAGE
      value: $(params.IMAGE)
    workspaces:
    - name: source
      workspace: common-workspace
    runAfter:
    - fetch-repository
  - name: apply-manifests
    taskRef:
      name: apply-manifests
    workspaces:
    - name: source
      workspace: common-workspace
    runAfter:
    - build-image
  - name: update-deployment
    taskRef:
      name: update-deployment
    params:
    - name: deployment
      value: $(params.deployment-name)
    - name: IMAGE
      value: $(params.IMAGE)
    runAfter:
    - apply-manifests

The “runAfter” field defines the order of execution for dependent Tasks: it tells Tekton that a Task must execute only after one or more other Tasks have completed. The “ClusterTask” kind used in our code refers to a cluster-scoped Task that is available in every namespace, whereas a regular Task is scoped to a single namespace.

Tekton PipelineRun

Lastly, we create a YAML file to run our Tekton CI CD Pipeline. For our example of adding three numbers, here’s what the PipelineRun looks like (note that pipelineRef must reference the Pipeline by its metadata name, add-three-numbers):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: sum-three-pipeline-run
spec:
  pipelineRef:
    name: add-three-numbers
  params:
    - name: first
      value: "2"
    - name: second
      value: "10"
    - name: third
      value: "10"

You can define and run a PipelineRun in the same way for the pipeline that builds and deploys the application source code from GitHub to OpenShift. Alternatively, here is one more way to start a Pipeline, using tkn (the Tekton CLI):

$ tkn pipeline start github-to-openshift-build-and-deploy \
    -w name=common-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.7/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-api \
    -p git-url=https://github.com/openshift/pipelines-vote-api.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api \
    --use-param-defaults

Conclusion

Congratulations! You have successfully created your first cloud-native Tekton CI CD Pipeline. In this guide, we demonstrated how to organize, install, and configure a Tekton CI/CD Pipeline for your Kubernetes Cluster. We also briefed you on the various Tekton components and concepts used in Tekton CI CD systems along with the benefits of using them. 

Hevo Data, a No-code Data Pipeline Platform, provides you with a consistent and reliable solution to manage data transfer from 100+ Data Sources (40+ Free Sources) like MySQL, Google Analytics 360, Salesforce, Zendesk, Segment, etc. to your desired destination like a Data Warehouse or Database in a hassle-free format. 

Visit our Website to Explore Hevo

Hevo can migrate your data to Amazon Redshift, Firebolt, Snowflake, Google BigQuery, PostgreSQL, Databricks, etc. with just a few simple clicks. Not only does Hevo export your data & load it to the destination, but also it transforms & enriches your data to make it analysis-ready, so you can readily analyze your data in your BI Tools. Hevo also allows the integration of data from Non-native Data Sources using Hevo’s in-built Webhooks Connector.

Why not take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You may also have a look at the unbeatable pricing, which will assist you in selecting the best plan for your requirements.

Thank you for reading.

Divyansh Sharma
Former Content Manager, Hevo Data

With a background in marketing research and campaign management at Hevo Data and myHQ Workspaces, Divyansh specializes in data analysis for optimizing marketing strategies. He has experience writing articles on diverse topics such as data integration and infrastructure by collaborating with thought leaders in the industry.
