So, you’re a Contentful user? It’s good to talk to someone who knows how important content, and its optimization, is to their business. Your teams can work directly on their content workflows and collaborate easily without writing any code. That’s appreciable!

At times, you may need to move the data about operations performed on your content from Contentful to a data warehouse, such as replicating data from Contentful to BigQuery. That’s where you come in. You take on the responsibility of replicating data from Contentful to a centralized repository so that analysts and key stakeholders can make business-critical decisions faster.

Give a high-five! We’ve prepared a straightforward guide that will help you replicate data from Contentful to BigQuery.

Note that currently, Hevo doesn’t support Contentful as a Source.

How to Replicate Data From Contentful to BigQuery?

You will need to run separate exports for the different types of data in Contentful; each export replicates your data in JSON files.

Follow along to replicate data from Contentful to BigQuery in JSON format:

Step 1: Export Data from Contentful

You can extract information about several kinds of operations in Contentful using webhooks. These operations include creating, publishing, and archiving content. Contentful also offers several REST APIs for accessing and manipulating content.

  • In this example, the export is performed using the Contentful CLI tool. First, install the CLI tool on your system.
  • In your command line, run the following command: contentful space export [options].
  • Instead of passing every option on the command line, you can store the options in a JSON configuration file.
  • Now, you can run the export using the following command:
contentful space export --config example-config.json
  • The Contentful data will look similar to the following:
{
  "snapshot": {
    "name": "Landing Page",
    "fields": [
      {
        "id": "title",
        "name": "Title",
        "required": true,
        "localized": true,
        "type": "Text"
      },
      {
        "id": "body",
        "name": "Body",
        "required": true,
        "localized": true,
        "type": "Text"
      }
    ],
    "sys": {
      "firstPublishedAt": "2017-11-15T13:38:11.311Z",
      "publishedCounter": 2,
      "publishedAt": "2017-11-15T13:38:11.311Z",
      "publishedBy": {
        "sys": {
          "type": "Link",
          "linkType": "User",
          "id": "4FLrUHftHW3v2BLi9fzfjU"
        }
      },
      "publishedVersion": 9
    }
  },
  "sys": {
    "space": {
      "sys": {
        "type": "Link",
        "linkType": "Space",
        "id": "yadjklj1kx9rmg0"
      }
    },
    "type": "Snapshot",
    "id": "category",
    "createdBy": {
      "sys": {
        "type": "Link",
        "linkType": "User",
        "id": "4FLrUHfthjkHW3v2BLi9fzfjU"
      }
    },
    "createdAt": "2022-11-18T11:29:46.809Z",
    "snapshotType": "post",
    "snapshotEntityType": "ContentType"
  }
}
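One detail worth knowing before Step 3: BigQuery’s JSON loader expects newline-delimited JSON (one object per line), whereas the CLI export is a single nested document. Below is a minimal sketch of converting an export file into NDJSON; the file names are hypothetical, and the record handling assumes the general shape shown above:

```python
import json

# Hypothetical file names; adjust them to match your export.
EXPORT_FILE = "contentful-export.json"
OUTPUT_FILE = "snapshots.ndjson"

def to_ndjson(export_path: str, output_path: str) -> int:
    """Rewrite an exported JSON document as one JSON object per line (NDJSON)."""
    with open(export_path) as f:
        data = json.load(f)
    # A list export becomes one row per element; a single object becomes one row.
    records = data if isinstance(data, list) else [data]
    with open(output_path, "w") as out:
        for record in records:
            out.write(json.dumps(record) + "\n")
    return len(records)
```

The resulting `.ndjson` file can then be passed to the bq load command described in Step 3.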

Step 2: Prepare the Data

Often, you won’t have a pre-defined data structure, and you’ll have to create the schema for your data tables from scratch. For every value in the response, identify its data type and build a table that can receive it accordingly. Contentful’s official documentation can help you identify the fields and data types.
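As a rough sketch of this step, you could derive a bq-style JSON schema from one sample record. The type mapping below is deliberately simplified (flat records only, no nested RECORD fields), and the sample field values are borrowed from the export shown earlier; treat it as a starting point, not a complete mapping:

```python
import json

# Simplified mapping from Python types to BigQuery column types.
# Assumption: flat records only; nested objects and arrays would
# need RECORD / REPEATED handling not shown here.
BQ_TYPES = {str: "STRING", bool: "BOOLEAN", int: "INTEGER", float: "FLOAT"}

def infer_schema(record: dict) -> list:
    """Build a bq-style JSON schema from a flat sample record."""
    return [
        {"name": name, "type": BQ_TYPES.get(type(value), "STRING"), "mode": "NULLABLE"}
        for name, value in record.items()
    ]

# Sample field values taken from the export above.
sample = {"id": "title", "name": "Title", "required": True, "localized": True}
print(json.dumps(infer_schema(sample), indent=2))
```

The printed schema can be saved to a file and passed to bq load in the next step.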

Step 3: Load the Data into BigQuery

You can use the bq command-line tool, particularly the bq load command, to upload data to your datasets and define schema and data-type information in Google BigQuery. The general syntax is:

bq --location=<LOCATION> load \
--source_format=<FORMAT> \
<DATASET.TABLE> \
<PATH_TO_SOURCE> \
<SCHEMA>

For more in-depth information on loading JSON data into Google BigQuery, refer to Google’s official documentation.
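To make the syntax concrete, here is one way the invocation might look for a newline-delimited JSON file. The location, dataset, table, and file names below are hypothetical placeholders; once the bq tool is installed and authenticated, the assembled command could be executed with subprocess.run(cmd, check=True):

```python
# Assemble a concrete `bq load` command for a newline-delimited JSON file.
# All names below are hypothetical placeholders; substitute your own.
location = "US"
dataset_table = "contentful_export.snapshots"  # <DATASET.TABLE>
source_file = "snapshots.ndjson"               # <PATH_TO_SOURCE>
schema_file = "snapshot_schema.json"           # <SCHEMA>

cmd = [
    "bq", f"--location={location}", "load",
    "--source_format=NEWLINE_DELIMITED_JSON",
    dataset_table,
    source_file,
    schema_file,
]
print(" ".join(cmd))
```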

This process will load your desired JSON files into Google BigQuery in a straightforward way.

The 3-step guide above replicates data from Contentful to BigQuery effectively. It is optimal for the following scenarios:

  • One-Time Data Replication: This method suits your requirements if your business teams need the data only once in a while.
  • Limited Data Transformation Options: Manually transforming data in JSON files is difficult and time-consuming. Hence, this method is ideal if the data in your JSON files is clean, standardized, and already in an analysis-ready form.
  • Dedicated Personnel: If your organization has dedicated people who have to perform the manual downloading and uploading of JSON files, then accomplishing this task is not much of a headache.
  • Coding Knowledge: Performing this replication requires some familiarity with writing bq commands for BigQuery.

However, as your data sources increase, you will have to spend a significant portion of your engineering bandwidth creating new data connectors. Before you can start your analysis, you need to build custom data transformations to filter, clean, and standardize your data. And as your data grows with your scaling business, more sources come in, each requiring its own custom data pipeline.

A more effortless option is a no-code solution that completely manages and maintains the data pipelines for you. Choosing a cloud-based tool like Hevo allows you to focus entirely on your business analysis without worrying about the data transfer process.

You can simply perform complex data transformations on the fly and quickly set up the data pipeline in a matter of minutes without any need for prior technical knowledge.

You can take our 14-day free trial to experience a better way to manage data pipelines.


Summing It Up 

Exporting and uploading JSON files is your go-to solution when your data analysts require fresh data from Contentful only once in a while. The bq command-line tool lets you copy data from a JSON file into BigQuery easily. This method is a good choice if you rarely need to copy data and require little to no data transformation. However, when you need to frequently replicate data from multiple sources with complex transformations for complete business analysis, Hevo is the right choice for you!

Now, you don’t need to bite the bullet and spend months developing & maintaining custom data pipelines. You can make all hassle go away in minutes by taking a ride with Hevo Data’s automated no-code data pipeline. 

Its 150+ plug-and-play native integrations will help you replicate data smoothly from multiple tools to a destination of your choice. Its intuitive UI makes it easy to navigate the platform. And with its pre-load transformation capabilities, you don’t even need to worry about manually finding errors or cleaning and standardizing your data.

With a no-code data pipeline solution at your service, companies can spend less time calling APIs, referencing data, and building pipelines, and more time gaining insights from their data.

Skeptical? Why not try Hevo for free and decide for yourself? With Hevo’s 14-day free trial, you can build a data pipeline from any of 150+ data sources to BigQuery and try out the experience.

Here’s a short video that will guide you through the process of building a data pipeline with Hevo.

We’ll see you again the next time you want to replicate data from yet another connector to your destination. That is if you haven’t switched to a no-code automated ETL tool already.

We hope you have found the appropriate answer to the query you were searching for. Happy to help!

Former Research Analyst, Hevo Data

Manisha is a data analyst with experience in diverse data tools like Snowflake, Google BigQuery, SQL, and Looker. She has written more than 100 articles on diverse topics related to the data industry.
