Setting up Looker Jira Integration: 2 Easy Methods

Published: May 18, 2021


If your Jira use case includes Business Intelligence or Embedded Analytics then it’s no surprise that you’re probably wondering how you can leverage Looker to deliver real-time reporting, interactive data visualization, and advanced analytics. Data from issue tracking tools like Jira can provide valuable insight into historical development performance and can help you plan for future development cycles.

By leveraging a data exploration and discovery platform, product teams, engineers, scrum masters, and others can easily conduct analysis on their project data and get comprehensive, detailed, and up-to-the-minute insights into their projects that they can act on immediately.

This blog post will help you get started creating and configuring your own Looker Jira Integration so that you can get the most value from your data.


Introduction to Looker

Looker Dashboard

Established in 2011, Looker Data Sciences, Inc. is a Bay Area-based company whose enterprise platform focuses on query-based Business Intelligence. Through the use of charts and graphs, businesses can use Looker to find, explore, process, and share information, allowing them to effectively influence course corrections and make better business decisions.

Looker’s platform works with transactional databases such as Microsoft SQL Server (MSSQL) and Oracle as well as analytical datastores like BigQuery, Snowflake, Redshift, and more. You can use Looker to crawl all your data sources and create baseline data models on top of all your data using a powerful SQL-like modeling language called LookML. The platform also provides an end-user visualization tool that allows analysts, marketers, and business owners to create easy-to-read reports and dashboards to explore patterns in data.
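To make the LookML idea concrete, here is a minimal sketch of what a LookML view over a Jira issues table could look like. The table and column names (`jiraissuesdb.jira_issues`, `issue_key`, `datetime_creation`) are assumptions matching the tutorial below, not an official model:

```lookml
view: jira_issues {
  sql_table_name: jiraissuesdb.jira_issues ;;

  dimension: issue_key {
    primary_key: yes
    type: string
    sql: ${TABLE}.issue_key ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, week, month]
    sql: ${TABLE}.datetime_creation ;;
  }

  measure: issue_count {
    type: count
  }
}
```

With a view like this in place, analysts can explore issue counts by creation date without writing SQL by hand.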

Introduction to Jira

Jira Dashboard

Jira is a proprietary issue tracking and project management software developed by Atlassian, and it is widely used in software development. Jira is the de facto tool used by agile teams to capture and organize issues, assign work, and track team activity.

One of the main entities in Jira is an issue, which can be used for logging stories, bug tracking, issue tracking, order tracking, task management, tracking help desk tickets, tracking leave request forms, or something else entirely, depending on how an organization uses the software.

Issues contain a lot of useful information that can be used to find patterns, spot potential problems, and improve business processes. For example, each issue records its author, assignee, description, comments, status, and more.
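As a rough illustration of the kind of data an issue carries, here is a simplified Python sketch of an issue's fields as returned by Jira's API. The values are hypothetical, and optional fields like the assignee can be absent, which is why defensive access matters:

```python
# A simplified, hypothetical sketch of a Jira issue's fields
issue = {
    "key": "PA-101",
    "fields": {
        "issuetype": {"name": "Bug"},
        "created": "2021-04-01T09:15:00.000+0000",
        "resolutiondate": None,          # still open
        "reporter": {"key": "jdoe", "displayName": "John Doe"},
        "assignee": None,                # unassigned
        "status": {"name": "In Progress"},
    },
}

# Defensive access: optional fields may be None, so fall back to an empty dict
assignee = issue["fields"].get("assignee") or {}
assignee_name = assignee.get("displayName")  # None when unassigned
```

This None-safe pattern is the same one the extraction script in Method 1 uses for reporters and assignees.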

Understanding Use Cases of Looker Jira Integration

You can use Looker to generate custom reports and dashboards that show statistics for projects, measure teamwork, and track activity in your Jira instance, so that you can make the right business decisions based on the data you receive.

Looker offers a number of comprehensive features that allow you to:

  • Get comprehensive information around an issue, including the current status and entire history of changes and updates.
  • Identify the amount of work remaining for a given project and time period against the current resource allocation.
  • Evaluate your progress over time versus your goals.
  • Understand how much work is being delivered on a cumulative basis and per-sprint basis in one central view.

Ways to Set up Looker Jira Integration

Method 1: Using Custom ETL Scripts for Looker Jira Integration

The first approach is to ETL the Jira data to a SQL-compatible database or a cloud data warehouse and let it be consumed by Looker from there. This method uses the Jira Python Library to extract data from the REST API to establish Looker Jira Integration.

Method 2: Using Hevo Data for Looker Jira Integration

The second approach is to use Hevo Data to connect to your Jira instance and pull data into a table-like structure. A fully managed, No-code Data Pipeline platform like Hevo Data helps you load data from Jira (among 100+ Sources, including 30+ Free Sources) to Looker in real-time, in an effortless manner. Hevo, with its minimal learning curve, can be set up in a matter of minutes, making users ready to load data without compromising performance. Its strong integration with various sources such as databases, files, analytics engines, etc. gives users the flexibility to bring in data of all different kinds in a way that’s as smooth as possible, without having to write a single line of code.

Get started with Hevo Data today! Sign up for a 14-day free trial! 

Methods to Set up Looker Jira Integration

So far in this article, you have explored the basics of Looker and Jira, including their key features. This article also covered a few use cases where Looker Jira Integration proves beneficial. You can use the following two methods to establish Looker Jira Integration:

Method 1: Using Custom ETL Scripts for Looker Jira Integration

In this method, you will use custom ETL scripts to load Jira data into an SQL-compatible database. This data can then be consumed by Looker for carrying out Data Analysis and Business Intelligence operations. Before you begin, make sure you have the following prerequisites in place:

  1. A Jira Server or Cloud instance.
  2. Access to a Looker environment.
  3. Access to a MySQL instance. The free MySQL Community edition is sufficient.
  4. Basic Python knowledge.

Step 1: Exporting Data from Jira using REST APIs

The Jira REST API enables you to interact with Jira programmatically. You can export your issues, users, user group settings, issue attachments, user avatars, and project logos using Jira’s REST API endpoints.
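Under the hood, issue exports go through Jira's REST search endpoint (`/rest/api/2/search`), which accepts a JQL query plus `startAt` and `maxResults` paging parameters. As a quick sketch of what such a request URL looks like (the server address here is a placeholder):

```python
from urllib.parse import urlencode

# Hypothetical base URL; replace with your own Jira server address
JIRA_URL = "https://jira.example.com"

def build_search_url(base_url, jql, start_at=0, max_results=100):
    """Build a URL for Jira's REST issue-search endpoint."""
    query = urlencode({"jql": jql, "startAt": start_at, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{query}"

url = build_search_url(JIRA_URL, "project = 'Project-A'")
```

The Jira Python Library used below wraps this same endpoint, so you rarely need to construct these URLs by hand.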

To export data from Jira, you will use the open-source Jira Python Library which eases the use of the Jira REST API from Python.

  • Install the Jira Python Library. The easiest way to download and install jira-python is through pip using the following command:
 $ pip install jira    

This will install the jira-python client as well as the requirements.

Note: Python 3.5 or newer is required.

  • After installing this library, the next step is to export data from Jira. Suppose that you want to export issues belonging to projects ‘Project-A’ and ‘Project-B’ and created during the last 90 days. You are going to save data contained in the following columns into a CSV file: issue type, creation date and time, resolution date and time, reporter, assignee, and status. Here is the Python code with comments:
# Import dependencies
from datetime import datetime
from time import sleep

import pandas as pd
from jira import JIRA, JIRAError

# Connection details for your Jira instance
url = 'https://jira.example.com'   # your Jira server address
username = 'user'                  # Jira username
password = 'password'              # Jira password (or API token)

# Create instance for interacting with Jira
jira = JIRA(options={'server': url}, basic_auth=(username, password))

# JQL query: issues from two projects created during the last 90 days
jql = "project in ('Project-A', 'Project-B') AND created > '-90d'"

# Search issues in blocks of 100
block_size = 100
block_num = 0
fields = "issuetype, created, resolutiondate, reporter, assignee, status"

# Define parameters for writing data
index_beg = 0
header = True
mode = 'w'

try:
    jira_search = jira.search_issues(jql, startAt=block_num * block_size,
                                     maxResults=block_size, fields=fields)

    # Iteratively read data
    while bool(jira_search):
        # Container for Jira's data
        data_jira = []

        for issue in jira_search:
            # Get issue key
            issue_key = issue.key

            # Get request type
            request_type = str(issue.fields.issuetype)

            # Get datetime of creation
            datetime_creation = issue.fields.created
            if datetime_creation is not None:
                # Interested in only seconds precision, so slice the unnecessary part
                datetime_creation = datetime.strptime(datetime_creation[:19], "%Y-%m-%dT%H:%M:%S")

            # Get datetime of resolution
            datetime_resolution = issue.fields.resolutiondate
            if datetime_resolution is not None:
                # Interested in only seconds precision, so slice the unnecessary part
                datetime_resolution = datetime.strptime(datetime_resolution[:19], "%Y-%m-%dT%H:%M:%S")

            # Get reporter's login and name
            reporter_login = None
            reporter_name = None
            reporter = issue.raw['fields'].get('reporter', None)
            if reporter is not None:
                reporter_login = reporter.get('key', None)
                reporter_name = reporter.get('displayName', None)

            # Get assignee's login and name
            assignee_login = None
            assignee_name = None
            assignee = issue.raw['fields'].get('assignee', None)
            if assignee is not None:
                assignee_login = assignee.get('key', None)
                assignee_name = assignee.get('displayName', None)

            # Get status
            status = None
            st = issue.fields.status
            if st is not None:
                status = str(st)

            # Add data to data_jira
            data_jira.append((issue_key, request_type, datetime_creation, datetime_resolution, reporter_login, reporter_name, assignee_login, assignee_name, status))

        # Write the block of data read from Jira
        index_end = index_beg + len(data_jira)
        data_jira = pd.DataFrame(data_jira, index=range(index_beg, index_end),
                                 columns=['Issue key', 'Request type', 'Datetime creation', 'Datetime resolution', 'Reporter login', 'Reporter name', 'Assignee login', 'Assignee name', 'Status'])
        data_jira.to_csv(path_or_buf='data_jira.csv', sep=';', header=header, index=True, index_label='N', mode=mode)

        # Update for the next iteration
        block_num = block_num + 1
        index_beg = index_end
        header = False
        mode = 'a'

        # Print how many issues were read
        if block_num % 50 == 0:
            print(block_num * block_size)

        # Optional pause before the next read, to avoid overloading Jira's server
        sleep(1)

        # New issues search
        jira_search = jira.search_issues(jql, startAt=block_num * block_size,
                                         maxResults=block_size, fields=fields)

except (JIRAError, AttributeError) as e:
    print(f'Error while reading from Jira: {e}')

  • url: The address of your Jira server.
  • username and password: Credentials of the Jira user.
  • startAt: Index of the first issue to return.
  • maxResults: Maximum number of issues to return per request.
  • The code above reads 100 issues per iteration and, after extracting the required fields, appends the data to a CSV file (data_jira.csv).
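The paging logic above is simple offset pagination: each block of results starts where the previous one ended. A minimal sketch of the offset arithmetic, using the same 100-issue block size as the script:

```python
def start_at(block_num, block_size=100):
    """Offset of the first issue in a given result block."""
    return block_num * block_size

# Block 0 starts at issue 0, block 1 at issue 100, block 2 at issue 200, ...
offsets = [start_at(n) for n in range(3)]
```

The loop terminates when `search_issues` returns an empty result list, i.e. when the offset has moved past the last matching issue.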

Step 2: Creating a Staging Database in MySQL to store Jira Data

Now that you have exported your data from Jira, the next step is to create a staging database for this data.

  • Using the MySQL client, log in to the MySQL Server using the root user.
mysql -u root -p
Enter password: ********

Type the password for the root user and press Enter.

  • Issue the CREATE DATABASE command with the database name, in this case, jiraissuesdb, and press Enter:
mysql> CREATE DATABASE jiraissuesdb;
Query OK, 1 row affected (0.12 sec)
  • Select the Jira database for use. Creating a database in MySQL does not select it for use; you must explicitly select the newly created database by issuing the USE database command as follows:
mysql> USE jiraissuesdb;
Database changed
  • Create a MySQL table for storing your Jira data. The columns in the table need to match the data from the CSV file you exported from Jira. In our case, you will create the table using the following command:
CREATE TABLE jira_issues (
            id INT NOT NULL AUTO_INCREMENT,
            issue_key VARCHAR(255) NOT NULL,
            request_type VARCHAR(255) NOT NULL,
            datetime_creation DATETIME NOT NULL,
            datetime_resolution DATETIME NULL,
            reporter_login VARCHAR(255) NULL,
            reporter_name VARCHAR(255) NULL,
            assignee_login VARCHAR(255) NULL,
            assignee_name VARCHAR(255) NULL,
            status VARCHAR(255) NULL,
            PRIMARY KEY (id)
);

Step 3: Load Jira Data in your Database Table

  • Load the data from the CSV file into the table using the LOAD DATA LOCAL INFILE command. Since the export used ‘;’ as the separator and included a header row, specify both:
LOAD DATA LOCAL INFILE '/downloads/data_jira.csv'
INTO TABLE jira_issues
FIELDS TERMINATED BY ';'
IGNORE 1 LINES;
  • Change the path (between the quotes) to match the path and filename of your CSV file.
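If you prefer to stay in Python for the load step, the same ingestion can be sketched with the standard library. The example below uses SQLite purely for illustration (MySQL's LOAD DATA is faster for bulk loads); the function name and schema are assumptions matching this tutorial:

```python
import csv
import sqlite3

def load_issues_csv(path, conn):
    """Load the ';'-separated CSV from Step 1 into a jira_issues table."""
    conn.execute("""CREATE TABLE IF NOT EXISTS jira_issues (
        n INTEGER, issue_key TEXT, request_type TEXT,
        datetime_creation TEXT, datetime_resolution TEXT,
        reporter_login TEXT, reporter_name TEXT,
        assignee_login TEXT, assignee_name TEXT, status TEXT)""")
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter=";")
        next(reader)  # skip the header row
        rows = list(reader)
        conn.executemany(
            "INSERT INTO jira_issues VALUES (?,?,?,?,?,?,?,?,?,?)", rows)
    conn.commit()
    return len(rows)
```

For MySQL, the same pattern works with a MySQL driver connection in place of sqlite3, at the cost of a round trip per batch.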

Step 4: Configuring your Database to work with Looker

For your database to work with Looker, you need to create and grant the required access to the Looker user using the following instructions:

  • Configure the MySQL database to use the mysql_native_password plugin to authenticate Looker through the JDBC driver. This can be done by starting the MySQL process with the flag:
--default-authentication-plugin=mysql_native_password
  • Alternatively, set the equivalent property in the my.cnf configuration file in the MySQL server install directory.
  • Next, issue the following statements, replacing ‘passwd’ with your desired password:
CREATE USER looker IDENTIFIED WITH mysql_native_password BY 'passwd';
GRANT SELECT ON jiraissuesdb.* TO 'looker'@'%';
  • Enable the creation of persistent derived tables (PDTs) by setting up a temp schema in MySQL. To create a temp database and to grant the required privileges to the looker user, issue the following commands:
CREATE SCHEMA looker_tmp;
GRANT ALL PRIVILEGES ON looker_tmp.* TO 'looker'@'%';
  • Set the max_allowed_packet variable to its maximum value, 1G, to prevent “SQLException: Packet for query is too large” errors. To do this, open the my.ini file located in the MySQL server install directory, search for the “max_allowed_packet” parameter, and set it to:
max_allowed_packet=1G
  • Restart the MySQL server for the changes to take effect.
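Taken together, the MySQL-side settings in this step amount to a configuration fragment like the following (a sketch; the file is my.cnf on Linux or my.ini on Windows, and its location varies by installation):

```ini
[mysqld]
default-authentication-plugin=mysql_native_password
max_allowed_packet=1G
```

Remember that these server-level settings only take effect after the restart described above.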

Step 5: Connecting Looker to the Database

  • Select Connections from the Database section in the Looker Admin panel. On the Connections page, click the Add Connection button.
  • Looker will display the Connection Settings page. Set the name of the connection as you would like to refer to it. Select MySQL as your SQL dialect.
  • Enter your database hostname and port number that Looker will use to connect to your database host. Enter jiraissuesdb as the name of your database.
  • Enter root as your username and also enter the password for the root user. Check the Persistent Derived Tables box to enable persistent derived tables. Leave the rest of the settings to their default values.
  • Once you’ve entered the credentials, click Test These Settings to verify that the information is correct and the database is able to connect. Once you have configured and tested your database connection settings, click Add Connection. Your database connection is now added to the list on the Connections page. 
Looker Jira Integration
  • You can click the Test button to test the connection, the Edit button to edit the connection, or the gear icon to perform other actions on the connection.

Limitations of ETL Scripts for Looker Jira Integration

This method that you have just looked at has some limitations:

  • This method doesn’t support real-time streaming; it is better suited to batch jobs.
  • There is a lot of configuration required to set up and maintain the data pipeline.

Method 2: Using Hevo Data for Looker Jira Integration

Hevo Logo

Hevo Data, a No-code Data Pipeline helps you transfer data from Jira (among 100+ sources) to Business Intelligence tools such as Looker, Data Warehouses, or a destination of your choice in a completely hassle-free & automated manner. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination. It allows you to focus on key business needs and perform insightful analysis using various BI tools such as Looker, Tableau, etc. 

Steps to use Hevo Data for Looker Jira Integration:

Hevo Data gets you started in three simple steps:

  • Extract: Connect Hevo with Jira and various other data sources by simply logging in with your credentials and extract information seamlessly.
  • Integrate: Consolidate your data from several sources in Hevo’s Managed Data Warehouse Platform and automatically transform it into an analysis-ready form.
  • Visualize: Connect Hevo with your desired BI tool such as Looker and easily visualize your data to gain better insights.

Check Out What Makes Hevo Amazing:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.

Simplify your data analysis with Hevo today! Sign up here for a 14-day free trial!


This article covered the basics of Looker and Jira, including their key features, and then walked through two simple methods you can utilize to establish Looker Jira Integration.

You can now load Jira data to any Data Warehouse such as Redshift, BigQuery, or Snowflake and analyze it in a BI tool like Looker, without writing code, in just a few minutes with Hevo.

Extracting complex data from a diverse set of data sources can be challenging, and this is where Hevo saves the day! Hevo offers a faster way to move data from Databases or SaaS applications into your Data Warehouse to be visualized in a BI tool. Hevo is fully automated and hence does not require you to code. You can try Hevo for free by signing up for a 14-day free trial. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Software Developer, Hevo Data

Talha is a seasoned Software Developer, currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. With a significant tenure at Flipkart prior to his current role, he brought innovative solutions to the space of data connectivity and software development.
