Building an entirely new data connector is difficult, especially when you’re already swamped with monitoring and maintaining your existing custom data pipelines. When an ad-hoc Harvest to Databricks connection request comes in from your HR and accounting teams, you have to carve engineering bandwidth out of your schedule for this manual task. We understand you are pressed for time and need a quick solution. If the situation only demands downloading and uploading a couple of CSV files, the task is a piece of cake. But as the frequency of data replication and the number of connectors grow, you may prefer an automated solution.
Well, worry no further! We’ve prepared a simple and straightforward guide to help you replicate data from Harvest to Databricks. Enough talk! Let’s get to it.
How to Replicate Data From Harvest to Databricks?
Harvest is cloud-based time-tracking and invoicing software that helps with expense tracking, project management, tracking billable and working hours, task assignment, invoicing, scheduling, and more.
To replicate data from Harvest to Databricks, you can either:
- Use CSV files, or
- Use a no-code automated solution.
We’ll cover replication via CSV files next.
Replicate Data from Harvest to Databricks Using CSV Files
Let’s dive into the process of replicating data from Harvest to Databricks in CSV format:
Step 1: Export Data from Harvest
Here’s the data that you can export from your Harvest account:
- Clients and Client Contacts
- People
- Projects
- Tasks
- Estimates
And the different types of reports in your Harvest account are:
- Time Report
- Detailed Time Report
- Detailed Expense Report
- Uninvoiced Report
- Invoiced Report
- Payments Received Report
- Contractor Report
Let’s dive into the steps for exporting data from Harvest in CSV format.
For exporting data about Contacts, People, Clients, and Tasks
- Click on the “Manage” tab.
- Now, click on the “Export” button at the top of the page.
- A list of your required data will be downloaded in Excel or CSV format.
For exporting Projects data
- Click on the “Projects” tab.
- Now, click on the “Export” button at the top of the page.
- You can choose whether you want to export all active projects, only budgeted projects, or only archived projects.
- A list of your required project data will be downloaded in CSV format.
For exporting Reports
- Click on the “Reports” tab.
- Select the name of the report you want to export.
- Now add the specifications for that report, such as timeframe, projects, and filters.
- Click the “Run Report” button.
- Now, click on the “Export” button at the top of the page. Then, choose the format as CSV.
For exporting Estimates data
- Click on the “Estimates” tab.
- To export a list of open estimates, click on the “Open” option and select the “Export” button.
- To export all of your estimates, click on the “All estimates” option. Now, you can filter the data by estimate status, client, or timeframe. Then select the “Export” button.
- Choose CSV as the format, and a list of your estimates data will be downloaded.
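If you would rather script these exports than click through each tab, the same records are also exposed by the Harvest API v2 (REST endpoints such as /v2/clients, /v2/projects, and /v2/time_entries). Below is a minimal, hedged Python sketch of that approach; the token, account ID, and contact email are placeholders, not real credentials, and the actual fetch is shown only as a comment.

```python
# Hedged sketch: pulling Harvest records programmatically instead of clicking
# "Export" on each tab. Token, account id, and email are placeholders.
import csv
import io

HARVEST_BASE = "https://api.harvestapp.com/v2"

def harvest_headers(access_token: str, account_id: str) -> dict:
    """Headers Harvest API v2 expects on every request."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Harvest-Account-ID": account_id,
        "User-Agent": "harvest-to-databricks-export (you@example.com)",
    }

def rows_to_csv(rows: list) -> str:
    """Flatten a list of API records (dicts) into CSV text ready for upload."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# A real fetch would look like this (requires the `requests` package):
# resp = requests.get(f"{HARVEST_BASE}/clients",
#                     headers=harvest_headers(token, account_id))
# csv_text = rows_to_csv(resp.json()["clients"])
```

The resulting CSV text can be saved to a file and uploaded to Databricks exactly like a file exported from the UI.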
Step 2: Import CSV Files into Databricks
- In the Databricks UI, go to the side navigation bar. Click on the “Data” option.
- Now, you need to click on the “Create Table” option.
- Then drag the required CSV files to the drop zone. Otherwise, you can browse the files in your local system and then upload them.
Once the CSV files are uploaded, your file path will look like `/FileStore/tables/<fileName>-<integer>.<fileType>`.
If you click on the “Create Table with UI” button, go through the following steps:
- Select the cluster where you want to preview your table.
- Click on the “Preview Table” button. Then, specify the table attributes such as table name, database name, file type, etc.
- Then, select the “Create Table” button.
- Now, the database schema and sample data will be displayed on the screen.
Once you click on the “Create Table in Notebook” button, you can go through the following steps:
- A Python notebook is created in the selected cluster.
- You can edit the table attributes and format using the necessary Python code.
- You can also run queries on SQL in the notebook to get a basic understanding of the data frame and its description.
In this case, the name of the table is “emp_csv.” However, you can name it according to your requirements.
- Now, on top of the Pandas data frame, you need to create and save your table in the default database or any other database of your choice.
In this example, “mytestdb” is the database where we intend to save our table.
- After you save the table, you can click on the “Data” button in the left navigation pane and check whether the table has been saved in the database of your choice.
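The notebook steps above can be sketched in a few lines of Python. This is a minimal illustration using pandas with an inlined CSV so it runs anywhere; the column names are made up, and the database name (“mytestdb”) and table name (“emp_csv”) simply mirror the example in this walkthrough.

```python
# Minimal sketch of the "Create Table in Notebook" flow. On Databricks the
# uploaded file lives under /FileStore/tables/...; here we inline a tiny CSV
# (with hypothetical columns) so the sketch is self-contained.
import io

import pandas as pd

csv_text = "emp_id,name,dept\n1,Asha,HR\n2,Ravi,Accounting\n"
df = pd.read_csv(io.StringIO(csv_text))

print(df.dtypes)   # inspect the inferred schema
print(df.head())   # preview the sample data

# In a Databricks notebook you would then register the pandas frame as a
# table in your chosen database, for example:
# spark_df = spark.createDataFrame(df)
# spark_df.write.mode("overwrite").saveAsTable("mytestdb.emp_csv")
# and explore it with SQL: spark.sql("SELECT * FROM mytestdb.emp_csv")
```

On a real workspace you would replace the inlined CSV with the uploaded file’s FileStore path from the previous step.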
Step 3: Modify & Access the Data
- The data now gets uploaded to Databricks. You can access the data via the Import & Explore Data section on the landing page.
- To modify the data, select a cluster and click on the “Preview Table” option.
- Then, change the attributes accordingly and select the “Create Table” option.
This 3-step approach is beneficial for replicating data in CSV format to connect Harvest to Databricks in the following scenarios:
- Low-frequency Data Replication: This method is appropriate when your product and marketing teams need the Harvest data only once in an extended period, e.g., monthly, quarterly, yearly, or just once.
- Dedicated Personnel: If your organization has dedicated people to manually select categories, customize templates, and download and upload CSV files, then accomplishing this task is not much of a headache.
- Low Volume Data: It can be a tedious task to repeatedly select different categories, select templates or customize them, and download & upload CSV files. Moreover, merging these CSV files from multiple departments is time-consuming if you are trying to measure the business’s overall performance. Hence, this method is optimal for replicating only a few files.
When the frequency of replicating data from Harvest increases, this process becomes highly monotonous. It adds to your misery when you have to transform the raw data every single time. With the increase in data sources, you would have to spend a significant portion of your engineering bandwidth creating new data connectors. Just imagine — building custom connectors for each source, transforming & processing the data, tracking the data flow individually, and fixing issues. Doesn’t it sound exhausting?
Instead, you should be focusing on more productive tasks. Being relegated to the role of a ‘Big Data Plumber‘ that spends their time mostly repairing and creating the data pipeline might not be the best use of your time.
To start reclaiming your valuable time, you can…
Replicate Data from Harvest to Databricks Using an Automated ETL Tool
Going all the way to write custom scripts for every new data connector request is not the most efficient and economical solution. Frequent breakages, pipeline errors, and lack of data flow monitoring make scaling such a system a nightmare.
You can streamline the Harvest to Databricks data replication process by opting for an automated tool. Here are the benefits of leveraging an automated no-code tool:
- It allows you to focus on core engineering objectives while your business teams can jump on to reporting without any delays or data dependency on you.
- Your sales & support teams can effortlessly enrich, filter, aggregate, and segment raw Harvest data with just a few clicks.
- The beginner-friendly UI saves the engineering team hours of productive time lost due to tedious data preparation tasks.
- Without coding knowledge, your analysts can seamlessly create thorough reports for various business verticals to drive better decisions.
- Your business teams get to work with near-real-time data with no compromise on the accuracy & consistency of the analysis.
- You get all your analytics-ready data in one place. With this, you can quickly measure your business performance and deep dive into your Harvest data to explore new market opportunities.
For instance, here’s how Hevo Data, a cloud-based ETL tool, makes the process to connect Harvest to Databricks ridiculously easy:
Step 1: Configure Harvest as a Source
Step 2: Configure Databricks as a Destination
All Done! Your ETL Pipeline Is Set Up
After implementing the 2 simple steps, Hevo Data will take care of building the pipeline for setting up Harvest to Databricks integration based on the inputs given by you while configuring the source and the destination.
The pipeline will automatically replicate new and updated data from Harvest to Databricks every hour (by default). However, you can also adjust the data replication frequency as per your requirements.
Data Pipeline Frequency
| Default Pipeline Frequency | Minimum Pipeline Frequency | Maximum Pipeline Frequency | Custom Frequency Range (Hrs) |
|---|---|---|---|
| 1 Hr | 1 Hr | 24 Hrs | 1-24 |
For in-depth knowledge of how a pipeline is built & managed in Hevo Data, you can also visit the official documentation for Harvest as a source and Databricks as a destination.
You don’t need to worry about security and data loss. Hevo’s fault-tolerant architecture will stand as a solution to numerous problems. It will enrich your data and transform it into an analysis-ready form without having to write a single line of code.
By employing Hevo to simplify your data integration needs, you can leverage its salient features:
Get started for Free with Hevo Data!
- Reliability at Scale: With Hevo Data, you get a world-class fault-tolerant architecture that scales with zero data loss and low latency.
- Monitoring and Observability: Monitor pipeline health with intuitive dashboards that reveal every state of the pipeline and data flow. Bring real-time visibility into your ELT with Alerts and Activity Logs.
- Stay in Total Control: When automation isn’t enough, Hevo Data offers flexibility – data ingestion modes, ingestion and load frequency, JSON parsing, destination workbench, custom schema management, and much more – for you to have total control.
- Auto-Schema Management: Correcting improper schema after the data is loaded into your warehouse is challenging. Hevo Data automatically maps the source schema with the destination warehouse so that you don’t face the pain of schema errors.
- 24×7 Customer Support: With Hevo Data, you get more than just a platform, you get a partner for your pipelines. Discover peace with round-the-clock “Live Chat” within the platform. Moreover, you get 24×7 support even during the 14-day full-feature free trial.
- Transparent Pricing: Say goodbye to complex and hidden pricing models. Hevo Data’s transparent pricing brings complete visibility to your ELT spending. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in the data flow.
What Can You Achieve by Replicating Your Data from Harvest to Databricks?
Here are 5 instances where replicating data from Harvest to Databricks can help your data analysts get critical business insights. Does your use case make the list?
- Which geography accounted for the highest client expenses in the last 3 months?
- What should the project workflow look like for a certain category?
- What is the average daily variation in tracked hours across all users?
- Who are the significant contributors to a project?
- How can you optimize your employees’ workflows?
Summing It Up
Exporting and importing CSV files is the smoothest process when your HR & accounting teams require data from Harvest only once in a while. But what if the HR & accounting teams request data from multiple sources at a high frequency? Would you carry on manually importing & exporting CSV files from every other source? In that situation, you can free yourself from these manual jobs by going for an automated ETL solution.
An automated ETL solution becomes necessary for real-time data demands such as monitoring email campaign performance or viewing the sales funnel. You can free your engineering bandwidth from these repetitive & resource-intensive tasks by selecting Hevo Data’s 150+ plug-and-play integrations (including 40+ free sources).
Visit our Website to Explore Hevo Data
Saving countless hours of manual data cleaning & standardizing, Hevo Data’s pre-load data transformations get it done in minutes via a simple drag-and-drop interface or your custom Python scripts. There is no need to go to your data warehouse for post-load transformations. You can run complex SQL transformations from the comfort of Hevo Data’s interface and get your data in the final analysis-ready form.
Want to take Hevo Data for a spin? Sign Up for a 14-day free trial and simplify your data integration process. Check out the pricing details to understand which plan fulfills all your business needs.
Share your experience of connecting Harvest to Databricks! Let us know in the comments section below!