‘Hey, how many customers do we receive through email? And through which marketing channel do our customers rate our product the best?’ Can your team’s data analysts answer questions like these? If yes, great! If not, don’t worry. I’m here to help you with that.

Replicating data from Freshsales to BigQuery will help you answer such questions with accurate results. How do you do that, you ask? In this blog, I will introduce you to two ways to connect Freshsales to BigQuery for data replication, along with the benefits of each.

Let’s get started!

Data Replication from Freshsales to BigQuery by Using CSV

Let’s go step by step. 

Step 1: Copy Data from Freshsales to CSV

  1. Select the Reports icon on the left panel.
  2. Pick the category of reports (e.g., Leads, Contacts, Accounts, or Deals) you would like to use. A list of reports under that category is displayed.
  3. Select the report that you want to export.
  4. If you want to export the data generated by the report as is, select Export as CSV.
  5. If you want to customize the data in the report before you export it, select Edit Report.
  • Under Filters & Grouping, adjust the time period and the filter logic (Quick or Advanced) by editing the filter conditions.
  • To pick the columns you need in your exported CSV file, select Simple as the table type.
    Under Grouping fields, choose the fields you need as rows and the fields you need as columns.
  • Select Preview Data once you are done. This shows the table view.
  • Select Add field and check the boxes for the fields you require as columns in your CSV.
  • Select Save.
     
Copy Data from Freshsales to CSV
  6. Click Export as CSV once you save the report.
  7. You will receive an email with a link to download the CSV file once the export is complete. Freshsales also sends a flash notification to your account when the export finishes, so you can click the download link from there as well.
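Before loading the export into BigQuery, it can help to sanity-check the file locally, for example to confirm the column headers, the delimiter, and how many header rows you will need to skip. Here is a minimal Python sketch; the filename is hypothetical and depends on what Freshsales names your export.

```python
import csv

# Hypothetical filename -- use whatever name your Freshsales export carries.
EXPORT_FILE = "freshsales_deals_export.csv"

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)            # first row: column names
    row_count = sum(1 for _ in reader)

print(f"{len(header)} columns: {header}")
print(f"{row_count} data rows")
```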

Now, the next step for Freshsales to BigQuery migration is to replicate the data from CSV to BigQuery.

Step 2: Data Migration from CSV to BigQuery

If you’re trying to load data from a local data source, you can use any one of the following to proceed:

  • Console
  • bq command-line tool
  • C#
  • Go
  • Java
  • Node.js
  • PHP
  • Python
  • Ruby

If you go with the Console:

  1. Go to the Google Cloud console and open the BigQuery page.
  2. Select a dataset in the Explorer panel by expanding your project.
  3. Expand the Actions menu and select Open.
  4. In the details panel, select Create table.
  5. Now, go to the Source section on the Create table page:
    • For Create table from, click Upload.
    • For Select file, select Browse.
    • Search for the file, and select Open.
    • For File format, select CSV. (Other supported formats are JSON (newline delimited), Avro, Parquet, and ORC.)
  6. Go to the Destination section on the Create table page:
    • For Project, select the right project.
    • For Dataset, select the appropriate dataset.
    • In the Table field, enter the name of the table you’re creating in BigQuery.
    • Verify that Native table is the Table type.
  7. Under the Schema section, add the schema definition.
    • If your files are CSV or JSON, you can enable schema auto-detection under Auto detect. For other supported file types, the schema information is self-described in the source data.
    • You can also add schema information manually by:
      • Selecting Edit as text and adding the table schema as a JSON array
      • Utilizing Add Field to manually input the schema.
  8. Expand Advanced options to review the additional settings available for CSV and JSON files.
  9. Select Create Table.
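If you would rather script the load than click through the Console, you can use one of the client libraries listed above. Here is a minimal sketch using the google-cloud-bigquery Python package to load a local CSV file. The project, dataset, table, and file names are assumptions; replace them with your own, and make sure application-default credentials are set up in your environment.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up application-default credentials

# Hypothetical destination table and source file.
table_id = "your-project.your_dataset.freshsales_deals"
source_file = "freshsales_deals_export.csv"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # let BigQuery infer the schema
    write_disposition=bigquery.WriteDisposition.WRITE_EMPTY,
)

with open(source_file, "rb") as f:
    load_job = client.load_table_from_file(f, table_id, job_config=job_config)

load_job.result()  # wait for the load to complete
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}.")
```

A side benefit of scripting the load this way is that it is not tied to the Console’s 10 MB cap on local files mentioned below.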

Here are a few limitations of using this method to load a file from a local source for Freshsales to BigQuery:

  • You cannot load files from a local data source if you use wildcards or comma-separated lists. Each file must be loaded separately.
  • The maximum size of a file loaded from a local data source through the Google Cloud console is 10 MB. Load larger files from Cloud Storage instead.

This is the case when the file is loaded from a local source. What about loading CSV data from Cloud Storage?

Loading CSV Data into a Table from Cloud Storage

Let’s take the case where you choose the Console out of all the available options to load CSV data from Cloud Storage into a new BigQuery table.

  1. Go to the BigQuery page in the Google Cloud console.
  2. In the Explorer panel, expand your project, and then click on a dataset.
  3. In the Dataset info section, select Create table.
  4. In the Create table panel, add the following information:
    1. In the Source section, select Google Cloud Storage in the Create table from list. Then:
      • Choose a file from the Cloud Storage bucket, or enter the Cloud Storage URI. You cannot include multiple URIs in the Google Cloud console, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you want to create, append to, or overwrite.
Creating Table to Load CSV Data
  • For File format, choose CSV.
  2. In the Destination section, add the following details:
    • For Dataset, add the dataset in which you want to create the table.
    • In the Table field, write the name of the table that you want to create.
    • Cross-check that the Table type field is Native table.
  3. In the Schema section, add the schema definition. To enable schema auto-detection, select Auto detect.
  4. Select Advanced options. Under that:
    • For Write preference, keep Write if empty selected. This option generates a new table and uploads your data into it.
    • For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job results in an invalid message and fails. This option applies only to CSV and JSON files.
    • To ignore values in a row that are not present in the table’s schema, select Unknown values.
    • For Field delimiter, select the character that separates the cells in your CSV file: Comma, Tab, Pipe, or Custom. If you select Custom, add the delimiter in the Custom field delimiter box. The default value is Comma.
    • For Header rows to skip, add the number of header rows to skip at the top of the CSV file. The default number is 0.
    • For Quoted newlines, check Allow quoted newlines to allow quoted data sections that include newline characters in a CSV file. The default value is false.
    • For Jagged rows, check Allow jagged rows to accept rows in CSV files that are missing trailing optional columns; the missing values are treated as nulls. If you leave it unchecked, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
    • For Encryption, choose Customer-managed key to use a Cloud Key Management Service key. If you keep the Google-managed key setting, BigQuery encrypts the data at rest by default.
  5. Select Create table.
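The same load can be scripted instead of clicked through. Below is a minimal Python sketch using the google-cloud-bigquery package that mirrors the Console options above (write preference, header rows to skip, quoted newlines, jagged rows, and error tolerance). The bucket, project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical destination table and Cloud Storage URI.
table_id = "your-project.your_dataset.freshsales_contacts"
uri = "gs://your-bucket/freshsales/contacts_export.csv"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter=",",          # "Field delimiter"
    skip_leading_rows=1,          # "Header rows to skip"
    allow_quoted_newlines=True,   # "Quoted newlines"
    allow_jagged_rows=False,      # "Jagged rows"
    max_bad_records=0,            # "Number of errors allowed"
    autodetect=True,              # "Auto detect" schema
    write_disposition=bigquery.WriteDisposition.WRITE_EMPTY,  # "Write if empty"
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # wait for the load to complete
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}.")
```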

The limitations of loading CSV data from Cloud Storage to replicate data from Freshsales to BigQuery are:

  • If your dataset’s location is set to a value other than the US multi-region, the Cloud Storage bucket must be in the same region as the dataset or contained in the same multi-region.
  • For external data sources, BigQuery does not guarantee data consistency. While a query is running, changes to the underlying data can produce unexpected results.
  • BigQuery does not support Cloud Storage object versioning. The load fails if the Cloud Storage URI includes a generation number.

You have seen the limitations of this method. Do you really want to replicate your data while putting up with these shortcomings? In the next section, let me introduce you to a method that doesn’t have any of the limitations mentioned above.

Using a Fully Automated Data Pipeline

You can use a fully automated data pipeline to replicate data from Freshsales to BigQuery. Here are a few advantages of using a fully automated pipeline:

  • No maintenance required: You only need to set up the pipeline once. After that, it requires no upkeep, allowing you to concentrate solely on analyzing your unified data.
  • Enables scalability: Because both BigQuery and the data pipeline are built on horizontally scalable architectures, you can scale up or down according to your business needs and stay confident about performance and cost.
  • Lowers your cost of ownership: Clear, flexible pay-as-you-go pricing makes costs predictable, and the Google BigQuery data warehouse further reduces your total cost of ownership.

Next, let me take you through the steps to use Hevo Data for replicating data from Freshsales to BigQuery.

Step 1: Configuring Freshsales as a Source

In the Configure your Freshsales Source page, specify the following:

Configuring Freshsales as a Source to Connect Freshsales to BigQuery

Step 2: Configuring BigQuery as a Destination

In the Configure your Google BigQuery Warehouse page, specify the following details:

BigQuery Destination

You have seen how Hevo Data can help you connect Freshsales to BigQuery. Next, let’s come to the benefits of replicating data from Freshsales to BigQuery.

What Can You Achieve by Replicating Your Data from Freshsales to BigQuery?

By replicating data from Freshsales to BigQuery, you can help your data analysts obtain crucial business insights for situations like the following. Does the list include your use case?

  • How much of a region’s consumer inquiries come via email?
  • Which client acquisition method has generated the most support tickets?
  • What proportion of agents reply to tickets that customers submit over each communication channel?
  • Which marketing channel has the highest customer satisfaction ratings?
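Once the data lands in BigQuery, answering questions like these comes down to running queries against the replicated tables. As a rough illustration, here is a minimal Python sketch that counts contacts per lead source; the table and column names are hypothetical and will depend on how your Freshsales objects map into BigQuery.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table with a `lead_source` column from Freshsales contacts.
query = """
    SELECT lead_source, COUNT(*) AS contacts
    FROM `your-project.your_dataset.freshsales_contacts`
    GROUP BY lead_source
    ORDER BY contacts DESC
"""

for row in client.query(query).result():
    print(f"{row.lead_source}: {row.contacts}")
```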

Summary

Replicating data from Freshsales to BigQuery helps data analysts make better marketing decisions. There are mainly two ways to achieve that. The first method is to use CSV files, loading the data either from local storage or from Google Cloud Storage.

The other method is to use an automated data pipeline. Options like Hevo Data provide a fully no-code data pipeline for your needs. Consider the pros and cons of each method and choose the one that fits your requirements.

Visit our Website to Explore Hevo Data

If you want to integrate data into your desired database or destination, Hevo Data is the right choice for you! It helps simplify ETL and the management of both your data sources and data destinations.

Want to take Hevo Data for a ride? Sign up for a 14-day free trial and experience the feature-rich Hevo suite first-hand. Check out the pricing details to understand which plan fulfills all your business needs.

Anaswara Ramachandran
Content Marketing Specialist, Hevo Data

Anaswara is an engineer-turned-writer specializing in ML, AI, and data science content creation. As a Content Marketing Specialist at Hevo Data, she strategizes and executes content plans leveraging her expertise in data analysis, SEO, and BI tools. Anaswara adeptly utilizes tools like Google Analytics, SEMrush, and Power BI to deliver data-driven insights that power strategic marketing campaigns.