Implementing a Looker Jira integration in your system architecture enables your team to transform issue-tracking data into dynamic dashboards with meaningful insights. Decision makers in your company can track performance more easily, spot bottlenecks, and develop better long-term plans. Looker’s powerful analytics and real-time reporting turn Jira data into a strategic asset, helping you make faster, better-informed choices that lead to results.

Key project indicators like sprint progress, issue resolution timelines, and backlog status are simple to monitor using Looker. This connection simplifies access to crucial data, so everyone on the team can make well-informed decisions that move projects forward.

    Introduction to Looker

    Looker Data Sciences, Inc. is an American computer software company headquartered in Santa Cruz, California. It was founded in January 2012 by Lloyd Tabb and Ben Porterfield and was acquired by Google in 2019, making it part of the Google Cloud Platform. Looker’s platform works with transactional databases, such as Microsoft SQL Server (MSSQL) and Oracle, as well as analytical data stores like BigQuery, Snowflake, Redshift, and more. You can use Looker to crawl all your data sources and create baseline data models on top of your data using a powerful SQL-like modeling language called LookML.

    The platform also provides an end-user visualization tool that allows analysts, marketers, and business owners to create easy-to-read reports and dashboards to explore patterns in data.


    Introduction to Jira

    Jira is proprietary issue-tracking and project management software developed by Atlassian, and it is widely used in software development.

    One of the primary entities in Jira is an issue, which can be used for logging stories, tracking bugs, managing tasks, tracking help desk tickets, managing leave request forms, or other purposes, depending on how an organization utilizes this software. Issues contain a wealth of useful information that can be used to identify patterns, potential problems, and to enhance business processes. For example, they contain authors, assignees, descriptions, comments, statuses, etc.
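    As an illustration, the field names below follow the JSON shape that Jira's REST API returns for an issue (the values are made up). Because fields like assignee come back as null when unset, it pays to read the payload defensively:

```python
# Illustrative shape of a Jira issue's raw JSON payload.
# Field names follow the Jira REST API; values here are invented.
raw_issue = {
    "key": "PA-1",
    "fields": {
        "issuetype": {"name": "Bug"},
        "status": {"name": "Open"},
        "reporter": {"displayName": "John Doe"},
        "assignee": None,  # unassigned issues come back as null
    },
}

fields = raw_issue["fields"]
assignee = fields.get("assignee")
# Guard against null before dereferencing nested fields
assignee_name = assignee.get("displayName") if assignee else None
print(raw_issue["key"], fields["status"]["name"], assignee_name)  # → PA-1 Open None
```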

      Effortlessly Perform Jira Integrations with Hevo

      Streamline your Jira data integration with Hevo and unlock powerful insights with ease. Here’s how Hevo enhances your Jira integration experience:

      • Comprehensive Data Integration: Migrate all your Jira data to get a holistic view of your projects and performance.
      • Automated Data Transformation: Automatically clean and transform Jira data to make it compatible for analysis across diverse platforms.
      • Real-Time Data Sync: Keep your Jira data updated in real-time, ensuring your reports and dashboards reflect the latest information.

      Join over 2,000 satisfied customers who trust Hevo and experience a smooth data migration process with us.

      Streamline Your Jira Integration Now

      Use Cases of Looker Jira Integration

      Looker offers comprehensive features that allow you to:

      • Get comprehensive information on an issue, including its current status and the entire history of changes and updates.
      • Identify the amount of work remaining for a given project and time period against the current resource allocation.
      • Evaluate your progress over time in relation to your goals.

      Using Custom ETL Scripts for Looker Jira Integration

      Step 1: Exporting Data from Jira using REST APIs

      The Jira REST API allows you to interact with Jira programmatically. You can export your issues, users, user group settings, issue attachments, user avatars, and project logos using Jira’s REST API endpoints.
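      Under the hood, an issue export is a paginated call to Jira's issue search resource (`/rest/api/2/search` on Jira Server/Data Center; Jira Cloud uses `/rest/api/3`). As a sketch, the query parameters look like this — `your-jira.example.com` is a placeholder hostname:

```python
# Sketch of the REST query that the Python library in the next step wraps.
from urllib.parse import urlencode

base = "https://your-jira.example.com/rest/api/2/search"
params = {
    "jql": "project in ('Project-A', 'Project-B') AND created > '-90d'",
    "startAt": 0,        # index of the first issue in this page
    "maxResults": 100,   # page size
    "fields": "issuetype,created,resolutiondate,reporter,assignee,status",
}
url = base + "?" + urlencode(params)
print(url)
```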

      To export data from Jira, you will use the open-source Jira Python Library, which eases the use of the Jira REST API from Python.

      • Install the Jira Python Library. The easiest way to download and install jira-python is through pip, using the following command:
       $ pip install jira    

      This will install the jira-python client along with its dependencies.

      Note: Python 3.5 or later is required.

      • After installing this library, the next step is to export data from Jira. Suppose that you want to export issues belonging to projects ‘Project-A’ and ‘Project-B’, and created during the last 90 days. You will save the data contained in the following columns into a CSV file: issue type, creation date and time, resolution date and time, reporter, assignee, and status. Here is the Python code with comments:
      # Import dependencies
      from datetime import datetime
      from time import sleep
      
      import pandas as pd
      from jira import JIRA, JIRAError
      
      # Create an instance for interacting with Jira
      # (url, username, and password are described in the notes below the code)
      jira = JIRA(options={'server': url}, basic_auth=(username, password))
      
      
      # Read data from Jira
      try:
          jql = "project in ('Project-A', 'Project-B') AND created > '-90d'"
          # Search issues
          block_size = 100
          block_num = 0
          jira_search = jira.search_issues(jql, startAt=block_num*block_size, maxResults=block_size, 
                                           fields="issuetype, created, resolutiondate, reporter, assignee, status")
      
          # Define parameters for writing data
          index_beg = 0
          header = True
          mode = 'w'
      
          # Iteratively read data
          while bool(jira_search):
              # Container for Jira's data
              data_jira = []
      
              for issue in jira_search:
                  # Get issue key
                  issue_key = issue.key
      
                  # Get request type
                  request_type = str(issue.fields.issuetype)
      
                  # Get datetime creation
                  datetime_creation = issue.fields.created
                  if datetime_creation is not None:
                      # Interested in only seconds precision, so slice unnecessary part
                      datetime_creation = datetime.strptime(datetime_creation[:19], "%Y-%m-%dT%H:%M:%S")
      
                  # Get datetime resolution
                  datetime_resolution = issue.fields.resolutiondate
                  if datetime_resolution is not None:
                      # Interested in only seconds precision, so slice unnecessary part
                      datetime_resolution = datetime.strptime(datetime_resolution[:19], "%Y-%m-%dT%H:%M:%S")
      
                  # Get reporter’s login and name
                  reporter_login = None
                  reporter_name = None
                  reporter = issue.raw['fields'].get('reporter', None)
                  if reporter is not None:
                      reporter_login = reporter.get('key', None)
                      reporter_name = reporter.get('displayName', None)
      
                  # Get assignee’s login and name
                  assignee_login = None
                  assignee_name = None
                  assignee = issue.raw['fields'].get('assignee', None)
                  if assignee is not None:
                      assignee_login = assignee.get('key', None)
                      assignee_name = assignee.get('displayName', None)
      
                  # Get status
                  status = None
                  st = issue.fields.status
                  if st is not None:
                      status = st.name
      
                  # Add data to data_jira
                  data_jira.append((issue_key, request_type, datetime_creation, datetime_resolution, reporter_login, reporter_name, assignee_login, assignee_name, status))
      
              # Write data read from Jira
              index_end = index_beg + len(data_jira)
              data_jira = pd.DataFrame(data_jira, index=range(index_beg, index_end), 
                                           columns=['Issue key', 'Request type', 'Datetime creation', 'Datetime resolution', 'Reporter login', 'Reporter name', 'Assignee login', 'Assignee name', 'Status'])
              data_jira.to_csv(path_or_buf='data_jira.csv', sep=';', header=header, index=True, index_label='N', mode=mode)
      
              # Update for the next iteration
              block_num = block_num + 1
              index_beg = index_end
              header = False
              mode = 'a'
      
              # Print how many issues were read
              if block_num % 50 == 0:
                  print(block_num * block_size)
      
              # Pause before next reading – it’s optional, just to be sure we will not overload Jira’s server
              sleep(1)
      
              # New issues search
              jira_search = jira.search_issues(jql, startAt=block_num*block_size, maxResults=block_size, 
                                               fields="issuetype, created, resolutiondate, reporter, assignee, status")
      
          jira.close()
      except (JIRAError, AttributeError) as error:
          jira.close()
          print(f'Error while reading from Jira: {error}')

      Where:

      • url: The Jira server address.
      • username and password: The credentials of the Jira user (for Jira Cloud, use the account email and an API token instead of a password).
      • startAt: The index of the first issue to return.
      • maxResults: The maximum number of issues to return per request.

      The code above reads 100 issues per iteration and, after extracting the required fields, appends them to a CSV file (data_jira.csv).
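      The script above slices Jira's timestamps to second precision before parsing. Jira's REST API actually returns ISO 8601 strings with milliseconds and a timezone offset (e.g. 2024-05-01T10:15:30.000+0200 — an illustrative value); a small helper can keep the offset when it is present and fall back to the slicing approach otherwise:

```python
from datetime import datetime

def parse_jira_timestamp(ts):
    """Parse a Jira ISO 8601 timestamp such as '2024-05-01T10:15:30.000+0200'.

    Keeps the timezone offset when present; otherwise falls back to the
    second-precision slicing used in the export script above.
    """
    if ts is None:
        return None
    try:
        # Full form with milliseconds (%f) and offset (%z)
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
    except ValueError:
        # Fallback: first 19 chars hold 'YYYY-MM-DDTHH:MM:SS'
        return datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S")

print(parse_jira_timestamp("2024-05-01T10:15:30.000+0200"))  # timezone-aware datetime
print(parse_jira_timestamp(None))  # → None
```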

      Step 2: Creating a Staging Database in MySQL to store Jira Data

      Now that you have exported your data from Jira, the next step is to create a staging database for this data.

      • Using the MySQL client, log in to the MySQL Server using the root user.
      $ mysql -u root -p
      Enter password: ********

      Type the password for the root user and press “Enter”.

      • Issue the “CREATE DATABASE” command with the database name, in this case, jiraissuesdb, and press Enter:
      mysql> CREATE DATABASE jiraissuesdb;
      Query OK, 1 row affected (0.12 sec)
      • Select the Jira database for use. Creating a database in MySQL does not select it for use; you must explicitly select the newly created database by issuing the “USE database” command as follows:
      mysql> USE jiraissuesdb;
      Database changed
      • Create a MySQL table for storing your Jira data. The columns must match the data in the CSV file you exported from Jira. Note that the timestamp columns use DATETIME (the export keeps second precision), and that fields such as the resolution date, reporter, and assignee can be empty, so those columns allow NULL:
      CREATE TABLE jira_issues (
                  id INT NOT NULL AUTO_INCREMENT,
                  issue_key VARCHAR(255) NOT NULL,
                  request_type VARCHAR(255) NOT NULL,
                  datetime_creation DATETIME NOT NULL,
                  datetime_resolution DATETIME NULL,
                  reporter_login VARCHAR(255) NULL,
                  reporter_name VARCHAR(255) NULL,
                  assignee_login VARCHAR(255) NULL,
                  assignee_name VARCHAR(255) NULL,
                  status VARCHAR(255) NULL,
                  PRIMARY KEY (id)
      );

      Step 3: Load Jira Data in your Database Table

      • Load the data from the CSV file into your table using the “LOAD DATA LOCAL INFILE” command. The export was written with a ‘;’ separator and a leading index column ‘N’, so the field terminator is ‘;’, the index column is read into a throwaway variable, and an empty resolution date is converted to NULL:
      LOAD DATA LOCAL INFILE '/downloads/data_jira.csv'
      INTO TABLE jira_issues
      FIELDS TERMINATED BY ';'
      ENCLOSED BY '"'
      LINES TERMINATED BY '\n'
      IGNORE 1 LINES
      (@n, issue_key, request_type, datetime_creation, @dres, reporter_login, reporter_name, assignee_login, assignee_name, status)
      SET datetime_resolution = NULLIF(@dres, '');
      • Change the path (between the quotes) to match the path and filename of your CSV file.
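      If LOAD DATA reports parsing problems, a quick pandas round-trip of one sample row (a minimal, self-contained sketch using the same to_csv parameters as the export script; ‘data_jira_sample.csv’ is just an illustrative file name) confirms the separator and the leading ‘N’ index column:

```python
import pandas as pd

# Round-trip a sample row in the same CSV format the export script writes:
# sep=';', header row, and an index column labelled 'N'.
columns = ['Issue key', 'Request type', 'Datetime creation', 'Datetime resolution',
           'Reporter login', 'Reporter name', 'Assignee login', 'Assignee name', 'Status']
sample = pd.DataFrame(
    [('PA-1', 'Bug', '2024-05-01 10:15:30', None,
      'jdoe', 'John Doe', 'asmith', 'Anna Smith', 'Open')],
    columns=columns,
)
sample.to_csv('data_jira_sample.csv', sep=';', header=True, index=True, index_label='N')

# Read it back the way MySQL will see it
check = pd.read_csv('data_jira_sample.csv', sep=';', index_col='N')
print(check.shape)  # → (1, 9)
```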

      Step 4: Configuring your Database to work with Looker

      For your database to work with Looker, you need to create and grant the required access to the Looker user using the following instructions:

      • Configure the MySQL server to use the “mysql_native_password” plugin for authentication when Looker connects through the JDBC driver. This can be done by starting the server process with the flag:
      --default-authentication-plugin=mysql_native_password
      • Alternatively, set the property in the “my.cnf” configuration file in the MySQL server install directory:
      [mysqld]
      default-authentication-plugin=mysql_native_password
      • Next, issue the following statements, replacing ‘passwd’ with your desired password (the GRANT targets the Jira database created earlier):
      CREATE USER looker IDENTIFIED WITH mysql_native_password BY 'passwd';
      GRANT SELECT ON jiraissuesdb.* TO 'looker'@'%';
      • Enable the creation of persistent derived tables (PDTs) by setting up a temp schema in MySQL. To create a temp database and to grant the required privileges to the looker user, issue the following commands:
      CREATE SCHEMA looker_tmp;
      GRANT
        SELECT,
        INDEX,
        INSERT,
        UPDATE,
        DELETE,
        CREATE,
        DROP,
        ALTER,
        CREATE TEMPORARY TABLES
      ON looker_tmp.* TO 'looker'@'%';
      • Set the “max_allowed_packet” variable to its maximum value, 1G, to prevent “SQLException: Packet for query is too large” errors. To do this, open the my.cnf file (my.ini on Windows) in the MySQL server install directory, find the “max_allowed_packet” parameter, and set it to:
      max_allowed_packet=1G
      • Restart the MySQL server for the changes to take effect.
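      Taken together, the two server-side settings above might appear in the configuration file as follows (an illustrative fragment):

```ini
[mysqld]
default-authentication-plugin=mysql_native_password
max_allowed_packet=1G
```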

      Step 5: Connecting Looker to the Database

      • Select “Connections” from the “Database” section in the Looker “Admin” panel. On the “Connections” page, click the “Add Connection” button.
      • Looker will display the “Connection Settings” page. Set the name of the connection as you would like to refer to it. Select MySQL as your SQL dialect.
      • Enter your database hostname and port number that Looker will use to connect to your database host. Enter “jiraissuesdb” as the name of your database.
      • Enter “looker” as your username, and enter the password you set for the “looker” user in Step 4. Check the Persistent Derived Tables box to enable persistent derived tables. Leave the remaining settings at their default values.
      • Once you’ve entered the credentials, click “Test These Settings” to verify that the information is correct and the database can connect. Once you have configured and tested your database connection settings, click “Add Connection”. Your database connection is now added to the list on the “Connections” page.
      • You can click the Test button to test the connection, the Edit button to edit the connection, or the gear icon to perform other actions on the connection.

      Limitations of ETL Scripts for Looker Jira Integration

      The custom-script method you have just walked through has some limitations:

      • This method doesn’t support real-time streaming. It is suited for batch jobs.
      • A significant amount of configuration is required to set up and maintain the data pipeline.

      Conclusion

      In this guide, we’ve shown you how integrating Looker with Jira can significantly enhance data-driven decision-making for development teams. Whether you prefer using custom ETL scripts for more control or opt for data pipelines for simplicity and scalability, both methods offer unique benefits.

      By setting up this integration, you can easily analyze Jira data in Looker, gaining real-time insights that improve team efficiency and performance. This empowers your stakeholders to make more informed decisions, streamline workflows, and achieve better results.

      Simplify your data analysis with Hevo today! Sign up here for a 14-day free trial!

      FAQ on Looker Jira Integration

      Can Looker be integrated with Jira?

      Yes. Looker can be integrated with Jira through various methods, such as using Jira’s REST API to pull data into Looker or using third-party connectors to facilitate integration.

      Can you integrate with Jira?

      Yes. Many platforms and tools can integrate with Jira through direct integrations provided by the tool or via Jira’s REST API. Integration options include project management tools, reporting and BI tools, automation services, and more.

      Does Looker have an API?

      Yes. Looker has an API that allows you to programmatically access data, manage Looker instances, run queries, and retrieve results. The Looker API can integrate Looker with other applications and automate workflows.

      Talha
      Software Developer, Hevo Data

      Talha is a Software Developer with over eight years of experience in the field. He is currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. Prior to this, he spent 4 years at Flipkart, where he played a key role in projects related to their data integration capabilities. Talha loves to explain complex information related to data engineering to his peers through writing. He has written many blogs related to data integration, data management aspects, and key challenges data practitioners face.