The growing use of modern applications in daily business activities has increased the demand for Continuous Integration and Continuous Deployment (CI/CD). In a fast-paced environment, companies rely on DevOps tools to optimize their workflows. Many such tools are available in the market, and GitLab is one that helps developers streamline both software development and delivery.
In GitLab, pipelines are integral to prompt integration and deployment. They are composed of jobs and stages that dictate what task is done and when. Pipelines can be configured in several different ways: basic, merge request, multi-project, and parent-child pipelines each have their own distinct benefits.
In this article, you will explore how to use Scheduled Pipelines to run CI/CD pipelines at regular intervals. You will learn what a GitLab Scheduled Pipeline is, how to create and run one, and how to update the ownership of a scheduled pipeline for a project.
What is GitLab?
GitLab is a DevOps platform that allows developers to develop, secure, and operate software in a single application, helping teams create a streamlined software workflow. It started as an open-source project in which many teams collaborated on software development, lifecycle management, and more. GitLab’s goal is to provide a common platform where each team member can directly impact the company roadmap. By leveraging GitLab pipeline APIs and triggers, development teams can streamline their workflows, automate processes, and enhance collaboration across the entire lifecycle.
Key Features of GitLab
Some of the main features of GitLab are listed below:
- Activity Stream: GitLab lets users view a list of the latest commits, merges, comments, and team member activity on their project.
- Powerful Branching: Git branches contain the full project history and can be created, moved, or shared instantly.
- Auto DevOps: GitLab can auto-configure the software development lifecycle by default. It detects, builds, tests, deploys, and monitors applications.
- Container Scanning: GitLab allows users to run security scans to ensure that Docker images don’t contain known vulnerabilities.
- Package Management: GitLab comes with built-in package registries (such as Maven and npm, plus Conan for C/C++), enabling teams to create a consistent and dependable software supply chain.
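As one concrete example of these features, container scanning is typically enabled by including GitLab’s built-in CI template in the project configuration. A minimal sketch (the template path follows GitLab’s documented convention):

```yaml
# .gitlab-ci.yml excerpt: enable GitLab's built-in container scanning job.
include:
  - template: Security/Container-Scanning.gitlab-ci.yml
```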
Looking for the perfect ETL solution for your GitLab data? Hevo stands out among the top tools with its seamless integration and powerful features. Why Choose Hevo?
- Versatile Integration: Supports GitLab as a source.
- No-Code Platform: Easily set up and manage data pipelines without any coding.
- Real-Time Data Sync: Ensure your data warehouse is always up-to-date with real-time data flows.
Join industry leaders, including Freight Tiger, who rely on Hevo for efficient and reliable data integration from GitLab.
Get Started with Hevo for Free
What is GitLab Scheduled Pipeline?
A GitLab Scheduled Pipeline triggers a pipeline periodically according to a predefined schedule, rather than in response to a push or merge. This lets you decide exactly which jobs and stages run, and when. There are some typical cases where jobs processed through a GitLab Scheduled Pipeline can be very beneficial.
These include deploying artifacts, maintenance jobs, and testing jobs. To run a GitLab Scheduled Pipeline, two primary prerequisites exist. First, the schedule owner must have at least the Developer role (taking and changing ownership of a scheduled pipeline is discussed later in this post). Second, the project must have a valid CI/CD configuration.
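For reference, a minimal `.gitlab-ci.yml` satisfying that second prerequisite might look like the sketch below (job names and commands are placeholders):

```yaml
# Hypothetical minimal .gitlab-ci.yml with one build and one test stage.
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling the code..."

test-job:
  stage: test
  script:
    - echo "Running tests..."
```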
You can list all pipeline schedules for your project through the GET /projects/:id/pipeline_schedules endpoint:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules"
A typical response looks like this:
[
  {
    "id": 13,
    "description": "Test schedule pipeline",
    "ref": "main",
    "cron": "* * * * *",
    "cron_timezone": "Asia/Tokyo",
    "next_run_at": "2017-05-19T13:41:00.000Z",
    "active": true,
    "created_at": "2017-05-19T13:31:08.849Z",
    "updated_at": "2017-05-19T13:40:17.727Z",
    "owner": {
      "name": "Administrator",
      "username": "root",
      "id": 1,
      "state": "active",
      "avatar_url": "http://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon",
      "web_url": "https://gitlab.example.com/root"
    }
  }
]
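To consume such a response programmatically, a short Python sketch (assuming the JSON above has already been fetched and stored in a string) might look like this:

```python
import json

# Sample response body from GET /projects/:id/pipeline_schedules,
# trimmed to the fields used below (matches the listing above).
response_body = '''
[
  {
    "id": 13,
    "description": "Test schedule pipeline",
    "ref": "main",
    "cron": "* * * * *",
    "cron_timezone": "Asia/Tokyo",
    "next_run_at": "2017-05-19T13:41:00.000Z",
    "active": true
  }
]
'''

schedules = json.loads(response_body)
# Keep only schedules that are currently active.
active = [s for s in schedules if s["active"]]
for s in active:
    print(f'{s["id"]}: {s["description"]} (cron {s["cron"]}, next run {s["next_run_at"]})')
```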
Creating a GitLab Scheduled Pipeline
To create a new GitLab Scheduled Pipeline programmatically, you can use the POST /projects/:id/pipeline_schedules API endpoint.
In the GitLab UI, you can add a pipeline schedule by selecting Menu -> Projects from the top bar, opening your project, and then selecting CI/CD -> Schedules. Here you can fill in the “New Schedule” form and define any CI/CD variables.
Run the following script to define the various attributes of the GitLab Scheduled Pipeline, including the branch or tag ref, cron schedule, time zone, and whether the schedule is active:
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
  --form description="Build packages" --form ref="main" \
  --form cron="0 1 * * 5" --form cron_timezone="UTC" --form active="true" \
  "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules"
{
  "id": 14,
  "description": "Build packages",
  "ref": "main",
  "cron": "0 1 * * 5",
  "cron_timezone": "UTC",
  "next_run_at": "2017-05-26T01:00:00.000Z",
  "active": true,
  "created_at": "2017-05-19T13:43:08.169Z",
  "updated_at": "2017-05-19T13:43:08.169Z",
  "last_pipeline": null,
  "owner": {
    "name": "Administrator",
    "username": "root",
    "id": 1,
    "state": "active",
    "avatar_url": "http://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon",
    "web_url": "https://gitlab.example.com/root"
  }
}
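The cron value "0 1 * * 5" in the request above means 01:00 every Friday. As an illustration of how the five cron fields are read, here is a small Python helper (not part of the GitLab API, purely for explanation):

```python
# Illustrative helper that labels the five cron fields GitLab pipeline
# schedules use: minute, hour, day of month, month, day of week.
CRON_FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe_cron(expr):
    """Split a five-field cron expression into a field -> value mapping."""
    values = expr.split()
    if len(values) != 5:
        raise ValueError("expected exactly 5 cron fields")
    return dict(zip(CRON_FIELDS, values))

# "0 1 * * 5": minute 0, hour 1, any day/month, day of week 5 (Friday).
schedule = describe_cron("0 1 * * 5")
```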
Running a GitLab Scheduled Pipeline
You can trigger a scheduled pipeline manually by selecting Menu -> Projects, then CI/CD -> Schedules, and finally selecting “Play” next to the desired pipeline.
The following configuration defines one job that runs only for scheduled pipelines and another that runs only for pushes:
job:on-schedule:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - make world

job:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
  script:
    - make build
Once specified, the rules can be reused as shown to run different jobs:
.default_rules:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

job1:
  rules:
    - !reference [.default_rules, rules]
  script:
    - echo "This job runs for the default branch, but not schedules."

job2:
  rules:
    - !reference [.default_rules, rules]
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - echo "This job runs for the default branch, but not schedules."
    - echo "It also runs for merge requests."
Taking Ownership of a GitLab Scheduled Pipeline
To create and run a scheduled pipeline successfully, its owner must have at least the Developer role; if the current owner does not, you can take ownership of the schedule. Select Menu -> Projects, then CI/CD -> Schedules, and select “Take Ownership” at the right end of the row for the pipeline concerned.
The same action can be performed through the API; the response reflects the new owner’s name, state, and other details:
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13/take_ownership"
{
  "id": 13,
  "description": "Test schedule pipeline",
  "ref": "main",
  "cron": "0 2 * * *",
  "cron_timezone": "Asia/Tokyo",
  "next_run_at": "2017-05-19T17:00:00.000Z",
  "active": true,
  "created_at": "2017-05-19T13:31:08.849Z",
  "updated_at": "2017-05-19T13:46:37.468Z",
  "last_pipeline": {
    "id": 332,
    "sha": "0e788619d0b5ec17388dffb973ecd505946156db",
    "ref": "main",
    "status": "pending"
  },
  "owner": {
    "name": "shinya",
    "username": "maeda",
    "id": 50,
    "state": "active",
    "avatar_url": "http://www.gravatar.com/avatar/8ca0a796a679c292e3a11da50f99e801?s=80&d=identicon",
    "web_url": "https://gitlab.example.com/maeda"
  }
}
Benefits of GitLab Scheduled Pipelines
A few advantages of using GitLab Scheduled Pipelines are listed below:
- Security is essential for a CI/CD pipeline. GitLab offers full control over access permissions and the code storage location, allowing organizations to maintain high security standards.
- GitLab Scheduled Pipelines can automatically detect, build, test, deploy, and monitor applications using the Auto DevOps feature. This saves time and enforces standard practices across the project.
- GitLab provides a score-based feedback system (DevOps Score) that lets teams see how well they have adopted pipeline and DevOps practices.
Conclusion
In this article, you learned how creating and running a GitLab Scheduled Pipeline can be streamlined by running the scripts described above for initiation, ownership changes, and other alterations to a scheduled pipeline of jobs. Instead of manually running code for each stage, you can also rely on integrated automation features that make the task simpler.
It is essential to store these data streams in Data Warehouses and run analytics on them to generate insights. Hevo Data is a No-code Data Pipeline solution that helps transfer data from GitLab and 150+ data sources to a desired Data Warehouse. It fully automates the process of transforming and transferring data to a destination without writing a single line of code.
Want to take Hevo for a spin? Sign Up here for a 14-day free trial and experience the feature-rich Hevo suite first hand. Hevo offers plans & pricing for different use cases and business needs!
Share your experience of learning about GitLab Scheduled Pipeline in the comments section below!
FAQs
1. Can I trigger GitLab pipelines using cron syntax?
Yes, GitLab uses cron syntax to schedule pipelines. You can set time intervals like daily, weekly, or monthly, specifying exact times for execution.
2. How can I edit or delete a scheduled pipeline in GitLab?
To edit or delete a scheduled pipeline, go to your project’s “CI / CD” > “Schedules” and either modify the schedule details or remove the schedule entirely.
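The same operations are available through the pipeline schedules API. As a sketch (the project ID, schedule ID, host, and token below are placeholders, consistent with the earlier examples):

```shell
# Update an existing schedule, e.g. change the cron to 02:00 daily.
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
  --form cron="0 2 * * *" \
  "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13"

# Delete the schedule entirely.
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/29/pipeline_schedules/13"
```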
3. Can I run a scheduled pipeline manually in GitLab?
Yes, you can manually trigger a scheduled pipeline by clicking “Play” next to the scheduled pipeline in the “Schedules” section, overriding the predefined schedule.
Aman Deep Sharma is a data enthusiast with a flair for writing. He holds a B.Tech degree in Information Technology, and his expertise lies in making data analysis approachable and valuable for everyone, from beginners to seasoned professionals. Aman finds joy in breaking down complex topics related to data engineering and integration to help data practitioners solve their day-to-day problems.