This blog post will help you get started creating and configuring your own Looker Jira Integration so that you can get the most value from your data.

  • How you can leverage Looker to deliver real-time reporting, interactive data visualization, and advanced analytics.
  • Data from issue tracking tools like Jira can provide valuable insight into historical development performance and can help you plan for future development cycles.
  • By leveraging a data exploration and discovery platform, product teams, engineers, scrum masters, and others can easily conduct analysis on their project data and get comprehensive, detailed, and up-to-the-minute insights into their projects that they can act on immediately.

Introduction to Looker

  • Looker’s platform works with transactional databases such as Microsoft SQL Server (MSSQL) and Oracle as well as analytical datastores like BigQuery, Snowflake, Redshift, and more.
  • You can use Looker to crawl all your data sources and create baseline data models on top of all your data using a powerful SQL-like modeling language called LookML.
  • The platform also provides an end-user visualization tool that allows analysts, marketers, and business owners to create easy-to-read reports and dashboards to explore patterns in data.
Looker Dashboard

Introduction to Jira

  • Jira is proprietary issue tracking and project management software developed by Atlassian. It is widely used in software development.
  • One of the main entities in Jira is the issue, which can be used for logging stories, tracking bugs, tracking orders, managing tasks, tracking help desk tickets, tracking leave request forms, or something else entirely, depending on how an organization uses the software.
  • Issues contain a lot of useful information that can be used to find patterns, spot potential problems, and improve business processes. For example, they contain authors, assignees, descriptions, comments, and statuses.
Jira Dashboard

Prerequisites

  1. A Jira Server or Cloud instance.
  2. Access to a Looker environment.
  3. Access to MySQL. You can download the free MySQL Community Edition from the official MySQL website.
  4. Basic Python knowledge.

Two Methods to Set up Looker Jira Integration

You can use the following two methods to establish Looker Jira Integration:

Method 1: Using Custom ETL Scripts for Looker Jira Integration

Step 1: Exporting Data from Jira using REST APIs

The Jira REST API enables you to interact with Jira programmatically. You can export your issues, users, user group settings, issue attachments, user avatars, and project logos using Jira’s REST API endpoints.

To export data from Jira, you will use the open-source Jira Python Library which eases the use of the Jira REST API from Python.

  • Install the Jira Python Library. The easiest and best way to download and install jira-python is through pip using the following command:
 $ pip install jira    

This will install the jira-python client as well as the requirements.

Note: Python 3.5 or later is required.

  • After installing this library, the next step is to export data from Jira. Suppose that you want to export issues belonging to projects ‘Project-A’ and ‘Project-B’ and created during the last 90 days. You are going to save data contained in the following columns into a CSV file: issue type, creation date and time, resolution date and time, reporter, assignee, status. Here is the python code with comments:
# Import dependencies
from jira import JIRA, JIRAError
from datetime import datetime
from time import sleep

import pandas as pd

# Create instance for interacting with Jira
jira = JIRA(options={'server': url}, basic_auth=(username, password))

try:
    # Read data from Jira
    jql = "project in ('Project-A', 'Project-B') AND created > '-90d'"

    # Search issues
    block_size = 100
    block_num = 0
    jira_search = jira.search_issues(jql, startAt=block_num*block_size, maxResults=block_size, 
                                     fields="issuetype, created, resolutiondate, reporter, assignee, status")

    # Define parameters for writing data
    index_beg = 0
    header = True
    mode = 'w'

    # Iteratively read data
    while bool(jira_search):
        # Container for Jira's data
        data_jira = []

        for issue in jira_search:
            # Get issue key
            issue_key = issue.key

            # Get request type
            request_type = str(issue.fields.issuetype)

            # Get datetime creation
            datetime_creation = issue.fields.created
            if datetime_creation is not None:
                # Interested in only seconds precision, so slice unnecessary part
                datetime_creation = datetime.strptime(datetime_creation[:19], "%Y-%m-%dT%H:%M:%S")

            # Get datetime resolution
            datetime_resolution = issue.fields.resolutiondate
            if datetime_resolution is not None:
                # Interested in only seconds precision, so slice unnecessary part
                datetime_resolution = datetime.strptime(datetime_resolution[:19], "%Y-%m-%dT%H:%M:%S")

            # Get reporter's login and name
            reporter_login = None
            reporter_name = None
            reporter = issue.raw['fields'].get('reporter', None)
            if reporter is not None:
                reporter_login = reporter.get('key', None)
                reporter_name = reporter.get('displayName', None)

            # Get assignee's login and name
            assignee_login = None
            assignee_name = None
            assignee = issue.raw['fields'].get('assignee', None)
            if assignee is not None:
                assignee_login = assignee.get('key', None)
                assignee_name = assignee.get('displayName', None)

            # Get status
            status = None
            st = issue.fields.status
            if st is not None:
                status = str(st)

            # Add data to data_jira
            data_jira.append((issue_key, request_type, datetime_creation, datetime_resolution, reporter_login, reporter_name, assignee_login, assignee_name, status))

        # Write data read from Jira
        index_end = index_beg + len(data_jira)
        data_jira = pd.DataFrame(data_jira, index=range(index_beg, index_end), 
                                 columns=['Issue key', 'Request type', 'Datetime creation', 'Datetime resolution', 'Reporter login', 'Reporter name', 'Assignee login', 'Assignee name', 'Status'])
        data_jira.to_csv(path_or_buf='data_jira.csv', sep=';', header=header, index=True, index_label='N', mode=mode)

        # Update for the next iteration
        block_num = block_num + 1
        index_beg = index_end
        header = False
        mode = 'a'

        # Print how many issues were read
        if block_num % 50 == 0:
            print(block_num * block_size)

        # Pause before the next reading - optional, just to be sure we don't overload Jira's server
        sleep(1)

        # New issues search
        jira_search = jira.search_issues(jql, startAt=block_num*block_size, maxResults=block_size, 
                                         fields="issuetype, created, resolutiondate, reporter, assignee, status")

except (JIRAError, AttributeError) as error:
    print('Error while reading from Jira: {}'.format(error))

  • url: The address of your Jira server.
  • username and password: Credentials of the Jira user.
  • startAt: Index of the first issue to return.
  • maxResults: Maximum number of issues to return per request.
  • The code above reads 100 issues per iteration and, after extracting the necessary fields, appends them to a CSV file (data_jira.csv).
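The pagination in the script above can be sketched in isolation: each request advances startAt by one block of block_size issues until an empty result comes back. A minimal sketch of just the offset arithmetic:

```python
def block_offsets(block_size=100):
    """Yield successive startAt offsets for paginated Jira searches."""
    block_num = 0
    while True:
        yield block_num * block_size
        block_num += 1

gen = block_offsets()
offsets = [next(gen) for _ in range(3)]
print(offsets)  # [0, 100, 200]
```

In the full script, the loop stops when search_issues returns an empty list for the next offset.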

Step 2: Creating a Staging Database in MySQL to store Jira Data

Now that you have exported your data from Jira, the next step is to create a staging database for this data.

  • Using the MySQL client, log in to the MySQL Server using the root user.
>mysql -u root -p
Enter password: ********

Type the password for the root user and press Enter.

  • Issue the CREATE DATABASE command with the database name, in this case, jiraissuesdb, and press Enter:
mysql> CREATE DATABASE jiraissuesdb;
Query OK, 1 row affected (0.12 sec)
  • Select the Jira database for use. Creating a database in MySQL does not select it for use; you must explicitly select the newly created database by issuing the USE database command as follows:
mysql> USE jiraissuesdb;
Database changed
  • Create a MySQL table for storing your Jira data. The columns in the table need to match the data from the CSV file you exported from Jira. In our case, you will create the table using the following command:
CREATE TABLE jira_issues (
            id INT NOT NULL AUTO_INCREMENT,
            issue_key VARCHAR(255) NOT NULL,
            request_type VARCHAR(255) NOT NULL,
            datetime_creation DATETIME NOT NULL,
            datetime_resolution DATETIME NULL,
            reporter_login VARCHAR(255) NOT NULL,
            reporter_name VARCHAR(255) NOT NULL,
            assignee_login VARCHAR(255) NULL,
            assignee_name VARCHAR(255) NULL,
            status VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
);

Step 3: Load Jira Data in your Database Table

  • Load the data from the CSV file into the table using the LOAD DATA LOCAL command:
LOAD DATA LOCAL INFILE '/downloads/data_jira.csv'
INTO TABLE jira_issues
FIELDS TERMINATED BY ';'
IGNORE 1 LINES;
  • Change the path (between the quotes) to match the path and filename of your CSV file.
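Before running LOAD DATA, it can help to confirm that the exported file parses with the same ';' separator the export script used. A minimal pandas sketch; the two sample rows below are hypothetical, in the column layout written in Step 1:

```python
import io
import pandas as pd

# Hypothetical sample in the same format the export script writes
sample = (
    "N;Issue key;Request type;Datetime creation;Datetime resolution;"
    "Reporter login;Reporter name;Assignee login;Assignee name;Status\n"
    "0;PA-1;Bug;2021-01-05 09:00:00;2021-01-07 15:30:00;jdoe;John Doe;asmith;Anna Smith;Done\n"
    "1;PA-2;Story;2021-01-06 10:00:00;;jdoe;John Doe;;;In Progress\n"
)

# In practice, replace io.StringIO(sample) with the path to data_jira.csv
df = pd.read_csv(io.StringIO(sample), sep=';', index_col='N',
                 parse_dates=['Datetime creation', 'Datetime resolution'])

# Unresolved issues have an empty resolution date, which pandas parses as NaT
resolution_days = (df['Datetime resolution'] - df['Datetime creation']).dt.days
```

This also surfaces issues with missing resolution dates or assignees before they hit the NOT NULL constraints in the database.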

Step 4: Configuring your Database to work with Looker

For your database to work with Looker, you need to create and grant the required access to the Looker user using the following instructions:

  • Configure the MySQL database to use the mysql_native_password plugin to authenticate to Looker through the JDBC driver. This can be done by starting the server process with the flag:
--default-authentication-plugin=mysql_native_password
  • Alternatively, set default-authentication-plugin=mysql_native_password in the my.cnf configuration file in the MySQL server install directory.
  • Next, issue the following statements, replacing ‘passwd’ with your desired password: 
CREATE USER looker IDENTIFIED WITH mysql_native_password BY 'passwd';
GRANT SELECT ON database_name.* TO 'looker'@'%';
  • Enable the creation of persistent derived tables (PDTs) by setting up a temp schema in MySQL. To create a temp database and grant the looker user full privileges on it, issue the following commands:
CREATE SCHEMA looker_tmp;
GRANT ALL ON looker_tmp.* TO 'looker'@'%';
  • Set the max_allowed_packet variable to its maximum value, 1G, to prevent “SQLException: Packet for query is too large” errors. To do this, open the my.ini file located in the MySQL server install directory, search for the max_allowed_packet parameter, and set it to:
max_allowed_packet=1G
  • Restart the MySQL server for the changes to take effect.

Step 5: Connecting Looker to the Database

  • Select Connections from the Database section in the Looker Admin panel. On the Connections page, click the Add Connection button.
  • Looker will display the Connection Settings page. Set the name of the connection as you would like to refer to it. Select MySQL as your SQL dialect.
  • Enter your database hostname and port number that Looker will use to connect to your database host. Enter jiraissuesdb as the name of your database.
  • Enter root as your username and also enter the password for the root user. Check the Persistent Derived Tables box to enable persistent derived tables. Leave the rest of the settings to their default values.
  • Once you’ve entered the credentials, click Test These Settings to verify that the information is correct and the database is able to connect. Once you have configured and tested your database connection settings, click Add Connection. Your database connection is now added to the list on the Connections page. 
Looker Jira Integration
  • You can click the Test button to test the connection, the Edit button to edit the connection, or the gear icon to perform other actions on the connection.

Limitations of ETL Scripts for Looker Jira Integration

This method that you have just looked at has some limitations:

  • This method doesn’t support real-time streaming. It is suited for batch jobs.
  • There is a lot of configuration required to set up and maintain the data pipeline.

Method 2: Using Hevo Data for Looker Jira Integration

Hevo Data, a No-code Data Pipeline helps you transfer data from Jira (among 100+ sources) to Business Intelligence tools such as Looker, Data Warehouses, or a destination of your choice in a completely hassle-free & automated manner. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination. It allows you to focus on key business needs and perform insightful analysis using various BI tools such as Looker, Tableau, etc. 

Steps to use Hevo Data for Looker Jira Integration:

Hevo Data gets you started with three simple steps:

  • Extract: Connect Hevo with Jira and various other data sources by simply logging in with your credentials and extract information seamlessly.
  • Integrate: Consolidate your data from several sources in Hevo’s Managed Data Warehouse Platform and automatically transform it into an analysis-ready form.
  • Visualize: Connect Hevo with your desired BI tool such as Looker and easily visualize your data to gain better insights.

Simplify your data analysis with Hevo today! Sign up here for a 14-day free trial!

Understanding Use Cases of Looker Jira Integration

Looker offers a number of comprehensive features that allow you to:

  • Get comprehensive information around an issue, including the current status and entire history of changes and updates.
  • Identify the amount of work remaining for a given project and time period, measured against the current resource allocation.
  • Evaluate your progress over time versus your goals.
  • Understand how much work is being delivered on a cumulative basis and per-sprint basis in one central view.
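Outside of Looker, the same “work remaining” metric boils down to counting issues that have not reached a terminal status. A minimal pandas sketch over the columns exported in Step 1 (the issue keys and statuses here are hypothetical):

```python
import pandas as pd

# Hypothetical snapshot of issue statuses, in the schema exported in Step 1
df = pd.DataFrame({
    'Issue key': ['PA-1', 'PA-2', 'PA-3', 'PA-4'],
    'Status': ['Done', 'In Progress', 'To Do', 'Done'],
})

# Work remaining = issues not yet in a terminal status
remaining = df[~df['Status'].isin(['Done'])]
print(len(remaining))  # 2
```

In Looker, the equivalent is a measure filtered on status, which can then be pivoted by project or sprint.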

Conclusion

This article covered the basics of Looker and Jira, including their key features, and then walked you through two simple methods to establish Looker Jira Integration.
Software Developer, Hevo Data

Talha is a Software Developer with over eight years of experience in the field. He is currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. Prior to this, he spent 4 years at Flipkart, where he played a key role in projects related to their data integration capabilities. Talha loves to explain complex information related to data engineering to his peers through writing. He has written many blogs related to data integration, data management aspects, and key challenges data practitioners face.
