The growing use of modern applications in daily business activities has increased the demand for Continuous Integration and Continuous Delivery. In a fast-paced environment, companies rely on DevOps tools to optimize their workflows. Many such tools are available on the market, and GitLab is one that helps developers streamline both software development and delivery.

In GitLab, CI/CD pipelines are integral to prompt integration and deployment. They are made up of jobs and stages that dictate what task is done and when. Pipelines can be configured in several different ways: basic, merge request, multi-project, and parent-child pipelines each have their distinct benefits.

In this article, you will explore how to use Scheduled Pipelines to run CI/CD pipelines at regular intervals. You will learn what a GitLab Scheduled Pipeline is, how to create and run one, and how to take ownership of a scheduled pipeline in a project.

What is GitLab? 


GitLab is a DevOps platform that allows developers to develop, secure, and operate software in a single application, helping create a streamlined software workflow. It started as an open-source project that lets teams collaborate on software development, manage the application lifecycle, and more. GitLab’s goal is to provide a common platform where each team member can directly impact the company roadmap.

GitLab has more than 3,000 contributors who regularly maintain and upgrade it. GitLab comes in a Community Edition, which is open source and free to use, and an Enterprise Edition with additional features. It provides a server that manages Git repositories, which simplifies administration tasks.

GitLab has more than 100,000 users and is widely used by big organizations such as IBM, Sony, Goldman Sachs, and NASA. GitLab helps developers and other technical teams to reduce the complexity and accelerate DevOps. With the help of GitLab, users can reduce the product lifecycles and increase productivity.

Key Features of GitLab

Some of the main features of GitLab are listed below:

  • Activity Stream: GitLab lets users view a list of the latest commits, merges, comments, and team members on a project.  
  • Powerful Branching: every Git branch contains the full project history and can be created, moved, or shared instantly.
  • Auto DevOps: GitLab can auto-configure the software development lifecycle by default. It detects, builds, tests, deploys, and monitors applications.
  • Container Scanning: GitLab allows users to run security scans to ensure that Docker images don’t contain any known vulnerabilities.
  • Package Management: GitLab comes with built-in package registries for formats such as Maven and npm, enabling teams to create a consistent and dependable software supply chain.

To learn more about GitLab, click here.

Here’s Why You Should Give Hevo A Try!

Hevo, a No-code Data Pipeline can help you Extract, Transform, and Load data from a plethora of Data Sources to a Data Warehouse of your choice — without having to write a single line of code. Hevo offers an auto-schema mapper that automates the process of migrating, loading, or integrating from 100+ supported connectors.

Get Started with Hevo for Free

Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full access free trial today to experience an entirely automated hassle-free Data Replication!

What is GitLab Scheduled Pipeline?

A GitLab Scheduled Pipeline triggers a pipeline periodically according to a predefined cron schedule, letting you run recurring jobs and stages without manual intervention. There are some typical cases where jobs processed through a GitLab Scheduled Pipeline can be very beneficial.

These include deploying artifacts, maintenance jobs, and testing jobs. To run a GitLab Scheduled Pipeline, two primary prerequisites exist. First, the schedule owner must have at least the Developer role (taking and changing ownership of a scheduled pipeline is discussed later in this post). Second, the project needs a valid CI/CD configuration. The basic concepts of the CI/CD configuration are illustrated below:

[Image: GitLab CI/CD configuration concepts]
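The CI/CD configuration lives in a `.gitlab-ci.yml` file at the root of the repository. A minimal sketch satisfying this prerequisite might look like the following (the stage and job names here are illustrative, not required by GitLab):

```yaml
# .gitlab-ci.yml — minimal illustrative configuration
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling the application"

test-job:
  stage: test
  script:
    - echo "Running tests"
```

Any pipeline triggered by a schedule runs the jobs defined in this file, subject to whatever rules the jobs declare.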

You can list all pipeline schedules for a project through the pipeline schedules API; a sample JSON response looks like this:
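Assuming a project with ID 29 and a personal access token (both placeholders, as is the host), a request like the following retrieves the schedules:

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules"
```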

[
    {
        "id": 13,
        "description": "Test schedule pipeline",
        "ref": "main",
        "cron": "* * * * *",
        "cron_timezone": "Asia/Tokyo",
        "next_run_at": "2017-05-19T13:41:00.000Z",
        "active": true,
        "created_at": "2017-05-19T13:31:08.849Z",
        "updated_at": "2017-05-19T13:40:17.727Z",
        "owner": {
            "name": "Administrator",
            "username": "root",
            "id": 1,
            "state": "active",
            "avatar_url": "http://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon",
            "web_url": "https://gitlab.example.com/root"
        }
    }
]

Creating a GitLab Scheduled Pipeline

To create a new GitLab Scheduled Pipeline programmatically, you can use the POST /projects/:id/pipeline_schedules API endpoint.

In the GitLab UI, you can add a pipeline schedule by selecting Menu -> Projects on the top bar and then selecting “Schedules” under the CI/CD tab. Here you can fill in the “New Schedule” form and define any CI/CD variables the schedule needs.
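CI/CD variables can also be attached to an existing schedule through the API. Assuming project ID 29 and schedule ID 13 (placeholders, as is the variable name `DEPLOY_ENV`), a request might look like this:

```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     --form "key=DEPLOY_ENV" --form "value=staging" \
     "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13/variables"
```

Variables defined this way are available to every pipeline the schedule triggers.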

Run the following script to define the schedule’s attributes, including the description, target branch or tag, cron schedule, time zone, and whether the schedule is active:

curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     --form description="Build packages" --form ref="main" \
     --form cron="0 1 * * 5" --form cron_timezone="UTC" \
     --form active="true" \
     "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules"
{
    "id": 14,
    "description": "Build packages",
    "ref": "main",
    "cron": "0 1 * * 5",
    "cron_timezone": "UTC",
    "next_run_at": "2017-05-26T01:00:00.000Z",
    "active": true,
    "created_at": "2017-05-19T13:43:08.169Z",
    "updated_at": "2017-05-19T13:43:08.169Z",
    "last_pipeline": null,
    "owner": {
        "name": "Administrator",
        "username": "root",
        "id": 1,
        "state": "active",
        "avatar_url": "http://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon",
        "web_url": "https://gitlab.example.com/root"
    }
}
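The `cron` field above follows the standard five-field cron syntax: minute, hour, day of month, month, and day of week. So "0 1 * * 5" fires at 01:00 (UTC, per `cron_timezone`) every Friday, which is why `next_run_at` falls on 2017-05-26, a Friday. A simplified matcher, sketched in Python, illustrates how the fields map onto a timestamp (it handles only `*` and single numeric values, not ranges, lists, or step syntax):

```python
from datetime import datetime

def matches_cron(expr, dt):
    """Return True if dt matches a simplified cron expression.

    Fields: minute, hour, day-of-month, month, day-of-week (0 = Sunday).
    Only "*" and single integers are supported in this sketch.
    """
    fields = expr.split()
    # Python's weekday() has Monday=0, so shift to cron's Sunday=0 convention
    values = [dt.minute, dt.hour, dt.day, dt.month, (dt.weekday() + 1) % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# 2017-05-26 was a Friday, so 01:00 on that date matches "0 1 * * 5"
print(matches_cron("0 1 * * 5", datetime(2017, 5, 26, 1, 0)))   # True
print(matches_cron("0 1 * * 5", datetime(2017, 5, 25, 1, 0)))   # Thursday: False
```

Real cron implementations (and GitLab’s scheduler) support much richer syntax; this sketch only shows how the five fields are interpreted.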
Here’s What Makes Hevo Unique — and It’s Something New!

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need to have for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

Want to take Hevo for a spin? Sign Up here for a 14-day free trial and experience the feature-rich Hevo.

Running a GitLab Scheduled Pipeline

You can trigger a scheduled pipeline manually by selecting Menu -> Projects, then CI/CD -> Schedules, and finally selecting “Play” next to the desired pipeline.
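The same “Play” action is available through the API. Assuming project ID 29 and schedule ID 13 (placeholders), a request like the following triggers the scheduled pipeline immediately instead of waiting for its next cron run:

```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13/play"
```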

The following configuration runs one job only for scheduled pipelines and another only for pushes, using the $CI_PIPELINE_SOURCE predefined variable:

job:on-schedule:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - make world
 
job:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
  script:
    - make build

Rules can also be defined once and reused across multiple jobs with the !reference tag:

.default_rules:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
 
job1:
  rules:
    - !reference [.default_rules, rules]
  script:
    - echo "This job runs for the default branch, but not schedules."
 
job2:
  rules:
    - !reference [.default_rules, rules]
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - echo "This job runs for the default branch, but not schedules."
    - echo "It also runs for merge requests." 

Taking Ownership of a GitLab Scheduled Pipeline

The schedule owner must have at least the Developer role for a scheduled pipeline to run successfully, so you may need to take ownership of a schedule created by someone else. Select Menu -> Projects, then CI/CD -> Schedules, and select “Take Ownership” at the right end of the row for the pipeline concerned.

The following request takes ownership of an existing schedule; the response reflects the updated owner details:

curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13/take_ownership"

{
    "id": 13,
    "description": "Test schedule pipeline",
    "ref": "main",
    "cron": "0 2 * * *",
    "cron_timezone": "Asia/Tokyo",
    "next_run_at": "2017-05-19T17:00:00.000Z",
    "active": true,
    "created_at": "2017-05-19T13:31:08.849Z",
    "updated_at": "2017-05-19T13:46:37.468Z",
    "last_pipeline": {
        "id": 332,
        "sha": "0e788619d0b5ec17388dffb973ecd505946156db",
        "ref": "main",
        "status": "pending"
    },
    "owner": {
        "name": "shinya",
        "username": "maeda",
        "id": 50,
        "state": "active",
        "avatar_url": "http://www.gravatar.com/avatar/8ca0a796a679c292e3a11da50f99e801?s=80&d=identicon",
        "web_url": "https://gitlab.example.com/maeda"
    }
}

Benefits of GitLab Scheduled Pipelines

A few advantages of using GitLab Scheduled Pipelines are listed below:

  • Security is essential for a CI/CD pipeline. GitLab offers full control over access and over where code is stored, allowing organizations to maintain high security standards.
  • With the Auto DevOps feature, GitLab Scheduled Pipelines can automatically detect, build, test, deploy, and monitor applications. This saves time and enforces standard practices across the project.
  • GitLab provides a score-based feedback system for DevOps that shows users how well they are implementing their pipelines.

Conclusion

In this article, you learned how creating and running a GitLab Scheduled Pipeline can be streamlined by running the scripts described above for creation, ownership, and other changes to scheduled jobs. Instead of manually running code for each stage, you can also rely on integrated automation features that make the task simpler.

It is essential to store these data streams in Data Warehouses and run Analytics on them to generate insights. Hevo Data is a No-code Data Pipeline solution that helps you transfer data from GitHub and 100+ other data sources to the Data Warehouse of your choice. It fully automates the process of transforming and transferring data to a destination without writing a single line of code.

Want to take Hevo for a spin? Sign Up here for a 14-day free trial and experience the feature-rich Hevo suite first hand. Hevo offers plans & pricing for different use cases and business needs!

Share your experience of learning about GitLab Scheduled Pipeline in the comments section below!

Aman Sharma
Technical Content Writer, Hevo Data

Aman Deep Sharma is a data enthusiast with a flair for writing. He holds a B.Tech degree in Information Technology, and his expertise lies in making data analysis approachable and valuable for everyone, from beginners to seasoned professionals. Aman finds joy in breaking down complex topics related to data engineering and integration to help data practitioners solve their day-to-day problems.

No-code Data Pipeline For your Data Warehouse