How to Set Up Bitbucket Pipelines: Made Easy



Bitbucket is one of the industry-leading repository management solutions, allowing developers to seamlessly implement open DevOps tasks.

Bitbucket offers a variety of services to developers, such as allowing teams to collaborate and create projects, as well as test and deploy code in a single platform. One such effective service of Bitbucket is Bitbucket pipelines, which allows developers to implement continuous integration and delivery operations, thereby empowering teams to build, test, and deploy codes within the Bitbucket environment.

In this article, you will learn about Bitbucket, the features of Bitbucket, Bitbucket pipelines, and how to set up Bitbucket Pipelines.

Prerequisites

  • A fundamental understanding of creating data pipelines.

What is Bitbucket?

Bitbucket Pipelines: logo | Hevo Data

Acquired by Atlassian in 2010, Bitbucket is a cloud-based service that allows developers to store and manage their code, as well as monitor and control code changes. In other words, Bitbucket is a Git repository management system specifically designed for professional teams implementing open DevOps operations. It provides a centralized location for managing Git repositories, collaborating on source code, and guiding you through the development flow. Bitbucket functionalities include the ability to restrict access to the source code, enforce a project workflow, review code through pull requests, and, most importantly, integrate with Jira for traceability.

Bitbucket provides users with three offerings: Bitbucket Cloud, Bitbucket Server, and Bitbucket Data Center. Bitbucket Cloud is hosted on Atlassian's servers and accessed through a URL, while Bitbucket Server is hosted in your on-premises environment. Bitbucket Data Center, Atlassian's enterprise offering, appears to users as a single instance of Bitbucket Server, but it is actually hosted on a cluster of servers in your environment. This gives it several advantages over Bitbucket Server, such as better performance, scalability, high availability, and smart mirroring.

Scale your data integration effortlessly with Hevo’s Fault-Tolerant No Code Data Pipeline

As the ability of businesses to collect data explodes, data teams have a crucial role to play in fueling data-driven decisions. Yet, they struggle to consolidate the data scattered across sources into their warehouse to build a single source of truth. Broken pipelines, data quality issues, bugs and errors, and lack of control and visibility over the data flow make data integration a nightmare.

1000+ data teams rely on Hevo’s Data Pipeline Platform to integrate data from 150+ sources in a matter of minutes. Billions of data events from sources as varied as SaaS apps, databases, file storage, and streaming sources can be replicated in near real-time with Hevo’s fault-tolerant architecture. What’s more, Hevo puts complete control in the hands of data teams with intuitive dashboards for pipeline monitoring, auto-schema management, and custom ingestion/loading schedules.

All of this combined with transparent pricing and 24×7 support makes us the most loved data pipeline software on review sites.

Take our 14-day free trial to experience a better way to manage data pipelines.

Key Features of Bitbucket

  • JIRA integration: JIRA is one of the greatest tools for tracking bugs in code. It is very straightforward to integrate Bitbucket and JIRA for tracking and managing bugs. As a result, a user can freely track the status of an issue or a bug report without leaving the current tool.
  • Built-in Issue Tracker: Bitbucket’s built-in system makes it simple to track issues. This tracker is adaptable and simple to use, with a variety of configurable fields such as version, milestone, and so on. It can also help you track the status of bugs, new feature requests from clients or developers, and tasks.
  • Code Review System: Bitbucket has a very fast code review system that allows developers and reviewers to review pull requests in a relatively short time. It has a unique commit-level evaluation system that allows users to easily check the updated code. In addition, Bitbucket also allows multiple reviewers to contribute to the code review process.
  • In-line Discussion: In-line Discussion can be used to insert in-line comments and threaded conversations into a code snippet. As a result, the interactions between reviewers and developers will be improved, thereby making the code effective and bug-free. For example, if a reviewer suggests a font style change, it will be viewable near the code segment.

What are Bitbucket Pipelines?

Bitbucket Pipelines is a CI/CD service built into Bitbucket. It enables you to build, test, and even deploy your code automatically based on a configuration file in your repository. Bitbucket spins up containers in the cloud, and you can run commands inside these containers just as you would on a local machine, but with the benefits of a fresh system customized and configured for your needs.

Bitbucket Pipelines also lets you configure and execute specific actions on your repositories whenever you push code to the origin. You can run tests and builds, and even SSH into your production servers to move code or restart processes, all while being wired up with messaging hooks so you stay updated as Pipelines handles everything.
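Putting this together, a minimal configuration looks like the following sketch (the image, scripts, and cache shown here are illustrative choices, not a prescribed setup):

```yaml
# bitbucket-pipelines.yml, at the root of the repository
image: node:18            # Docker image used to run each step

pipelines:
  default:                # runs on every push to any branch
    - step:
        name: Build and test
        caches:
          - node          # cache node_modules between runs
        script:
          - npm install
          - npm test
```

On each push, Bitbucket starts a fresh node:18 container, checks out the repository, and runs the script commands in order; the step fails if any command exits non-zero.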

What are the Steps to Set Up Bitbucket Pipelines?

Before you begin setting up Bitbucket Pipelines, ensure that you have a pre-configured Bitbucket Cloud account and at least one repository in your Bitbucket workspace. You can configure your pipelines in two ways: by writing the YAML file directly or by using the Bitbucket UI wizard. The steps given below walk you through the Pipelines configuration wizard.

  • Initially, go to your Bitbucket account and click on Pipelines in the left side panel. 
  • Click on “Create your first pipeline.”
  • Now, you are asked to choose an appropriate template in the template section. The templates cover a number of pre-configured use cases for applications, microservices, mobile, IaaS, and serverless development. Bitbucket supports major cloud providers such as AWS, Azure, and Google Cloud Platform, as well as popular programming languages such as Node.js, PHP, Java, Python, and .NET Core. Based on the language detected in your Bitbucket repository, the wizard automatically suggests templates in that language.
Bitbucket Pipelines: select template | Hevo Data
  • Select one of the templates available in the template section. If you’re not sure about the template options, go with the RECOMMENDED option.
  • After selecting a template, you will be taken to the YAML editor, where you can configure your Bitbucket pipeline.
  • Your Bitbucket Pipelines build configuration is defined in the bitbucket-pipelines.yml file located at the root of your repository.
Bitbucket Pipelines: configure pipeline | Hevo Data
  • You can write scripts to build and deploy your projects and configure caches to speed up builds with a basic pipeline configuration. You can also define a different image for each step to manage varying dependencies across actions in your Bitbucket pipeline. A pipeline is composed of a series of steps, and multiple pipelines can be defined in the configuration file. The default pipeline is configured under the default section, and the other sections of the configuration file are identified by specific keywords, as shown in the above image.
  • The configuration file must include at least one pipeline section, which must contain at least one step with at least one script. Each step gets 4 GB of memory, a pipeline can contain up to 100 steps, and each step runs in its own Docker container. You can use a different type of container for each step by selecting a different image.
Bitbucket Pipelines: script | Hevo Data
  • Now, configure your Bitbucket pipeline in the YAML editor, as shown in the above image.
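The points above can be sketched as a configuration with two steps, each running in its own container (the images and commands here are illustrative):

```yaml
image: python:3.11        # default image for steps that do not override it

pipelines:
  default:
    - step:
        name: Run tests
        script:
          - pip install -r requirements.txt
          - pytest
    - step:
        name: Build front end
        image: node:18    # this step runs in a different container
        script:
          - npm ci
          - npm run build
```

Because each step gets a fresh container, anything you need to carry between steps (build outputs, for example) has to be declared as an artifact.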

How to Change Template in Bitbucket Pipelines?

You can change the template at any time to change the use case by opening the dropdown menu and selecting another template. Note that if you select a new template, the existing content will be overwritten.

Bitbucket Pipelines: change template | Hevo Data

How to Add More Steps in Bitbucket Pipelines?

Now, you can also add pipes to your pipeline configuration. Pipes make it simple to configure a pipeline with a variety of third-party tools like AWS, Firebase, and SonarCloud. 

Bitbucket Pipelines: add more steps | Hevo Data

How to Add Pipes to Bitbucket Pipelines?

To include pipes in your Bitbucket pipeline, simply select the pipe you want to use, copy the code snippet of the pipe, and paste it into the editor. There are dozens of pipes available in Bitbucket, and you can see the entire list by clicking Explore more pipes, as shown in the above image.

There are two ways to add pipes to your pipeline: using the online editor or editing the configuration file directly. When using the online editor, open your bitbucket-pipelines.yml file in the editor and select the pipe you want to add to the Bitbucket pipeline. Copy the pipe and paste it into the script section of your step, then add your specific values in single quotes and un-comment any optional variables you want to use. After adding the pipe, your pipeline is all set to execute in Bitbucket.

If you would rather edit the configuration directly to add pipes, you can add the task details to your bitbucket-pipelines.yml file using your preferred editor. Each pipe's repository contains instructions on how to use the pipe, along with lines that you can copy and paste into your bitbucket-pipelines.yml file. While you are in the pipe repo, take a look at the scripts to see what the pipe does behind the scenes.
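As an illustration, here is how a pipe call sits inside a step's script. The example uses Atlassian's aws-s3-deploy pipe; the bucket name, path, and variable values are placeholders, and the pinned version may differ from the latest release:

```yaml
pipelines:
  default:
    - step:
        name: Deploy to S3
        script:
          - pipe: atlassian/aws-s3-deploy:1.1.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID        # defined in repository settings
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'us-east-1'
              S3_BUCKET: 'my-bucket-name'                  # placeholder bucket
              LOCAL_PATH: 'build'                          # placeholder folder to upload
```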

Bitbucket Pipelines: add steps | Hevo Data

How to Add Variables to Bitbucket Pipelines?

You can also define and use custom variables in the YAML file. Fill in the variable's name and value, and check the Secured box if you want to encrypt it. After adding a custom variable, click the Add button, as shown in the image.

Bitbucket Pipelines: add variables | Hevo Data
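Once a variable is defined, you reference it in the YAML with $VARIABLE_NAME. A minimal sketch, assuming a secured variable named MY_API_KEY and a deploy script of your own:

```yaml
pipelines:
  default:
    - step:
        name: Deploy
        script:
          # MY_API_KEY is a secured variable defined in the Pipelines settings;
          # secured values are masked if they appear in the build logs.
          - ./deploy.sh --key $MY_API_KEY
```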

By following the above-mentioned steps, you have successfully configured your Bitbucket pipeline. However, you can always go back to the YAML editor by clicking the pipeline cog icon to edit or customize the pipeline stages.

All of the capabilities, none of the firefighting

Using manual scripts and custom code to move data into the warehouse is cumbersome. Frequent breakages, pipeline errors, and lack of data flow monitoring make scaling such a system a nightmare. Hevo’s reliable data pipeline platform enables you to set up zero-code and zero-maintenance data pipelines that just work.

  • Reliability at Scale: With Hevo, you get a world-class fault-tolerant architecture that scales with zero data loss and low latency. 
  • Monitoring and Observability: Monitor pipeline health with intuitive dashboards that reveal every stat of the pipeline and data flow. Bring real-time visibility into your ELT with Alerts and Activity Logs.
  • Stay in Total Control: When automation isn’t enough, Hevo offers flexibility in data ingestion modes, ingestion and load frequency, JSON parsing, destination workbench, custom schema management, and much more, so you have total control.
  • Auto-Schema Management: Correcting improper schema after the data is loaded into your warehouse is challenging. Hevo automatically maps source schema with destination warehouse so that you don’t face the pain of schema errors.
  • 24×7 Customer Support: With Hevo you get more than just a platform, you get a partner for your pipelines. Discover peace with round the clock “Live Chat” within the platform. What’s more, you get 24×7 support even during the 14-day full-feature free trial.
  • Transparent Pricing: Say goodbye to complex and hidden pricing models. Hevo’s Transparent Pricing brings complete visibility to your ELT spend. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in data flow. 


In this article, you learned about Bitbucket, its key features, Bitbucket Pipelines, and how to set up Bitbucket Pipelines. The article outlined the basic ways to create pipes, variables, and steps. However, you can further explore the step-by-step procedures for creating pipes and customizing the YAML configuration files to build more effective pipelines for different use cases.

Hevo Data, a No-code Data Pipeline Platform, provides you with a consistent and reliable solution to manage data transfer from 150+ Data Sources (40+ Free Sources) to your desired destination like a Data Warehouse or Database hassle-free. 

Visit our Website to Explore Hevo

Hevo can ETL your Jira Data to Amazon Redshift, Firebolt, Snowflake, Google BigQuery, PostgreSQL, Databricks, etc. with just a few simple clicks. Not only does Hevo export your data & load it to the destination, but it also transforms & enriches your data to make it analysis-ready, so you can readily analyze your data in your BI Tools.

Why not take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You may also have a look at the unbeatable pricing, which will assist you in selecting the best plan for your requirements.

Thank you for sticking around and reading our blog. If you have any queries concerning Bitbucket Pipelines, please leave your message in the comments section below.

Freelance Technical Content Writer, Hevo Data

Ishwarya has experience working with B2B SaaS companies in the data industry, and her passion for data science drives her to produce informative content that helps individuals comprehend the intricacies of data integration and analysis.

No Code Data Pipeline For Your Data Warehouse