Bitbucket Pipelines lets your team automate the software development process through continuous integration and delivery. The importance of automation in software development has never been greater.

By automating your team’s builds and deployments, you save time on manual work, and you lower risk by following a consistent, repeatable process. In the end, you spend less time handling emergencies and more time delivering quality code quickly.

In this blog, you’ll learn how the Bitbucket Pipeline Trigger works and how to run and schedule pipelines manually.

What Are Bitbucket Pipelines?

Pipelines are an essential feature of Bitbucket. They give developers a secure and flexible environment for automatically building, testing, and deploying code based on a configuration file in the repository, and they can be configured precisely to your requirements.

The service is referred to as an integrated Continuous Integration/Continuous Deployment (CI/CD) service. Continuous Integration (CI) is the practice of merging code into a shared repository frequently, ideally several times a day; each integration is then verified by an automated build and tests. Continuous Deployment (CD) is a release strategy in which any commit that passes the automated tests is automatically deployed to production, making changes visible to users.

Quite simply, Bitbucket pipeline services provide teams with best practices and methodologies for achieving their work goals while maintaining security and code quality.

How do Bitbucket Pipelines Work?

It’s no secret that Bitbucket Pipelines is a set of tools for developers within Bitbucket. Let’s explore a bit more.

As a CI/CD service, Bitbucket Pipelines lets developers automatically build and test their code. For each run, a container is spun up in the cloud, and your commands execute inside it. This allows developers to run unit tests on every change pushed to the repository.

This makes it easier to ensure that your code meets your requirements and is safe.

Bitbucket Pipeline Trigger - Homepage

With Bitbucket Pipelines, your tests scale with your work: a pipeline runs for each commit in a fresh Docker container. You won’t be limited by the power of your own hardware, and your pipelines can grow as your requirements do.

Bitbucket Pipelines also offers a simple setup with ready-to-use templates. Consider this example: you want to get up and running as quickly as possible and don’t want to bother setting up build agents or integrating separate CI tools, since that time would be better spent coding. With Bitbucket Pipelines, you can start working right away without any prior setup, and you don’t need to switch between different tools.

In addition, the entire process, from coding to deployment, is managed within Bitbucket’s cloud, providing a quick feedback loop. You can see exactly where a command broke your build across all commits, pull requests, and branches.

If you’re looking for a reliable solution to simplify your software development process, this is an excellent option.

Simplify ETL Using Hevo’s No-Code Data Pipeline

Hevo Data, a Fully-managed Data Pipeline platform, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo’s wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 100+ Data Sources straight into Data Warehouses, or any Databases. To further streamline and prepare your data for analysis, you can process and enrich raw granular data using Hevo’s robust & built-in Transformation Layer without writing a single line of code!

GET STARTED WITH HEVO FOR FREE

Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full access free trial today to experience an entirely automated hassle-free Data Replication!

How to get started with Bitbucket Pipelines?

Bitbucket Pipelines is a CI/CD solution that is incorporated into Bitbucket. It enables you to build, test, and even deploy your code automatically based on a configuration file in your repository.

Understand the YAML file

Pipelines are defined in a YAML file called bitbucket-pipelines.yml, located in the root folder of your repository. To learn more about configuring it, see Configure bitbucket-pipelines.yml.
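
As a reference point, here is a minimal sketch of a bitbucket-pipelines.yml file. The Docker image, step name, and commands are illustrative placeholders, not required values:

image: node:18                 # Docker image the build container is based on (illustrative)

pipelines:
  default:                     # Runs on every push that no more specific pipeline matches
    - step:
        name: Build and test
        script:
          - npm install
          - npm test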

Configuring your first pipeline

Bitbucket provides two ways to configure your pipeline: either directly through the YAML file or by using the wizard. Follow these steps to configure your pipeline.

Requirements

  • An account on Bitbucket Cloud is required.
  • At least one repository must be present in your workspace.

Steps

  1. Select Pipelines in your repository in Bitbucket.
  2. Click Create your first pipeline; you will be directed to the template section.
  3. There are several templates available. If you are not sure which to use, pick the one marked Recommended.
Bitbucket Pipeline Trigger Template selection

  4. Once you have chosen a template, you can configure your pipeline using the YAML editor.

Bitbucket Pipeline Trigger - first pipeline

How to change a template?

Changing the template is as easy as opening a dropdown and selecting another one. It is important to remember that choosing a new template overrides the existing content.

Bitbucket Pipeline Trigger - Starter template

How to add more steps?

Adding more steps is easy: hover over the options in the steps panel to copy a code snippet, then paste it into the editor. A sketch of a pipeline with additional steps follows the screenshot below.

Bitbucket Pipeline Trigger - Adding more steps
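
For example, a second step can be appended after the first, or several steps can run in parallel. The step names and commands below are illustrative placeholders:

pipelines:
  default:
    - step:
        name: Build
        script:
          - echo "Compile or package the application here"
    - parallel:                # These steps run at the same time once Build has finished
        - step:
            name: Unit tests
            script:
              - echo "Run unit tests here"
        - step:
            name: Lint
            script:
              - echo "Run a linter here"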

How to add Pipes?

Pipes let you integrate third-party tools with your pipelines easily.

Using a pipe is as simple as selecting the pipe you want to use, copying it, and pasting it into the editor. The full list of pipes is available by clicking Explore more pipes.

Bitbucket Pipeline Trigger - Adding pipes
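
As a sketch of what a pasted pipe looks like, the step below uses the atlassian/slack-notify pipe. The pipe version and variable values are illustrative and would need to match your own setup:

pipelines:
  default:
    - step:
        name: Notify Slack
        script:
          # Pipe reference copied from the pipes panel; pin the version you actually use
          - pipe: atlassian/slack-notify:2.0.0
            variables:
              WEBHOOK_URL: $SLACK_WEBHOOK   # Repository variable holding your webhook URL
              MESSAGE: "Pipeline finished"
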
What makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Sign up here for a 14-day free trial!

How to add variables?

You can define custom variables to use in the YAML file. Type in the variable name along with its value; check the Secured box if you want the value to be masked, and then click Add.

Bitbucket Pipeline Trigger - Adding variables
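
Once defined, a variable can be referenced by name in your scripts. The variable name VERSION below is a hypothetical example:

pipelines:
  default:
    - step:
        name: Show version
        script:
          # VERSION is assumed to be a custom variable added in the repository or workspace settings
          - echo "Deploying version $VERSION"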

Result

Once you’ve configured your first pipeline, you can always come back and edit it via the cog (settings) icon on the Pipelines page.

Executing Bitbucket Pipeline Trigger manually

With manual triggers, you can customize your CI/CD pipeline so that some steps only execute when they are triggered by hand. Deployment steps are a perfect fit, as they often need manual testing or checks before they run.

You can make a step manual in your bitbucket-pipelines.yml file by adding trigger: manual to that step.
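
For instance, here is a minimal sketch of a pipeline whose deployment step waits for a manual trigger; the step names and commands are illustrative:

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - echo "Runs automatically on every push"
    - step:
        name: Deploy to production
        trigger: manual        # This step waits until someone runs it from the Pipelines page
        script:
          - echo "Runs only after a manual trigger"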

As pipelines are triggered by commits, the first step cannot be manual. If the whole pipeline needs to be run manually, you can set up a custom pipeline instead. The advantage of a custom pipeline is that you can temporarily add or change the values of your variables, for example, to add a version number or to supply a single-use value.

Setting up a Bitbucket Pipeline Trigger manually

You can run an existing pipeline manually against a specific commit, or schedule it to run.

Use a custom pipeline if you wish to run a pipeline only manually. Custom pipelines are not executed automatically when a commit is made to a branch; you define one by adding a custom pipeline configuration to bitbucket-pipelines.yml. Any pipeline that isn’t defined as a custom pipeline will still run automatically when a commit is made to the branch it applies to.

You can run a pipeline manually from the Bitbucket Cloud UI. To trigger it, you’ll need write permission on the repository.

Steps

  1. Add the pipeline to your bitbucket-pipelines.yml file. Any pipeline build configuration defined there can then be triggered manually.

Example 

pipelines:
  custom: # Pipelines in this section can only be triggered manually
    sonar: # The name shown in the Bitbucket Cloud UI
      - step:
          script:
            - echo "Manual triggers for Sonar are awesome!"
    deployment-to-prod:
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:  # Pipelines that run automatically on a commit to a branch can also be triggered manually
    staging:
      - step:
          script:
            - echo "Automated pipelines are cool too."

How does Bitbucket Pipeline Trigger work?

The Bitbucket Cloud interface allows users to trigger pipelines manually from either the Branches or Commits views.

1) How to run a pipeline manually from the Branches view?

  1. In Bitbucket, open your repository and go to Branches.
  2. Choose the branch you want to run a pipeline for.
  3. From the (…) menu, choose Run pipeline for a branch.
  4. Select a pipeline and click Run.

2) How to run a pipeline manually from the Commits view?

  1. In Bitbucket, open your repository and go to Commits.
  2. Choose the hash of the commit you want to run a pipeline against.
  3. Choose Run pipeline.
  4. Select a pipeline and click Run.

Conclusion

You have now seen how simple it is to set up and use a Bitbucket Pipeline Trigger. As with any tool, it’s crucial to fully grasp its features so you can reduce stress and save time. With Pipelines, you can, for example, deploy to a test environment automatically after every commit to the main branch.

Even though Bitbucket is a trusted and secure platform, security issues can arise at any time. To protect your repository, it is best to require two-factor authentication for each contributor account and to ensure that all laptops and other devices connected to your repository are adequately secured. In an era of cybercrime, you don’t want to create opportunities for it.

To handle your Databases more efficiently, it is preferable to integrate them with a solution that can carry out Data Integration and Management procedures for you without much ado, and that is where Hevo Data, a Cloud-based ETL Tool, comes in. Hevo Data supports 100+ Data Sources and helps you transfer your data from these sources to your Data Warehouses in a matter of minutes, all without writing any code!

Visit our Website to Explore Hevo

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. Hevo offers plans & pricing for different use cases and business needs, check them out!

Share your experience of learning Bitbucket Pipeline Trigger in the comments section below!

Samuel Salimon
Freelance Technical Content Writer, Hevo Data

Samuel specializes in freelance writing within the data industry, adeptly crafting informative and engaging content centered on data science by merging his problem-solving skills.
