Do you want to transfer your Elasticsearch data to Google BigQuery? Are you finding it challenging to connect Elasticsearch to BigQuery? If yes, then you’ve landed at the right place! This step-by-step guide walks you through two ways of moving your data from Elasticsearch to Google BigQuery efficiently.

The goal is to make the data export process as smooth and hassle-free as possible without compromising efficiency.

Upon a complete walkthrough of this article, you will be able to set up a connection between Elasticsearch & Google BigQuery and transfer data for analysis in real-time. It will also help you build a customized ETL pipeline for your organization and give you a deeper understanding of the tools and techniques involved.

Ways to Connect Elasticsearch to BigQuery

Method 1: Using Hevo Data, a No-code Data Pipeline

A fully managed, No-code Data Pipeline platform like Hevo Data helps you load data from Elasticsearch (among 150+ Data Sources) to Google BigQuery in real-time, in an effortless manner. Hevo, with its minimal learning curve, can be set up in a matter of minutes, making users ready to load data without compromising performance. Its strong integration with various sources, such as databases, files, and analytics engines, gives users the flexibility to bring in data of all kinds without having to write a single line of code.

Get started with Hevo for free

Method 2: Using Apache Airflow & Google Dataflow to Connect Elasticsearch to BigQuery 

Using Google Dataflow and Apache Airflow & Beam is another method to establish a connection between Elasticsearch & Google BigQuery. This method requires integrating your Elasticsearch cluster and Google Cloud Project using a VPC network & NAT gateway. You will then need to write Kotlin & Python-based custom code snippets to carry out the ETL job using the ElasticsearchIO & BigQueryIO classes of the Apache Beam SDK.

Methods to Connect Elasticsearch to BigQuery

There are multiple ways in which you can transfer data from Elasticsearch to BigQuery:

Method 1: Using Hevo Data, a No-code Data Pipeline


Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

Steps to use Hevo Data:

Hevo Data focuses on two simple steps to get you started:

Step 1: Configure Elasticsearch as your Source

  • Click PIPELINES in the Navigation Bar.
  • Click + CREATE in the Pipelines List View.
  • In the Select Source Type page, select Elasticsearch.
  • In the Configure your Elasticsearch Source page, specify your Elasticsearch connection settings, such as the pipeline name, host, port, and user credentials.

Step 2: Set BigQuery as your Destination

  • Click DESTINATIONS in the Navigation Bar.
  • Click + CREATE in the Destinations List View.
  • In the Add Destination page, select Google BigQuery as the Destination type.
  • In the Configure your Google BigQuery Warehouse page, specify the destination details, such as the destination name, authorized Google account, project ID, dataset, and GCS bucket.

To know more about the BigQuery Elasticsearch integration, refer to the Hevo documentation.

Simplify your data analysis with Hevo today and sign up here for a 14-day free trial!

Method 2: Using Apache Airflow & Google Dataflow to Connect Elasticsearch to BigQuery

Setting up a custom ETL pipeline that will help you transfer data from Elasticsearch to BigQuery requires you to use a VPC network and NAT gateway to set up a connection between your Google Cloud Project & Elasticsearch clusters. You will then use ElasticsearchIO & BigQueryIO classes to fetch and import data.


This method can be implemented using the following steps:

Step 1: Setting up a VPC for Google Dataflow

To start loading data from Elasticsearch to BigQuery, you will first have to establish a connection between your Elasticsearch cluster and your Google Cloud Project. For Elasticsearch clusters running on Amazon Web Services EC2, you will further have to use Amazon Route 53, the AWS DNS service, to associate a domain with your Elasticsearch cluster.

To do this, go to the official AWS Console website and log in with your credentials.


Once you’ve logged in, go to the Route 53 console and select the Hosted Zones option from the panel on the left. Click on the Create Record Set option and configure a record set that points your chosen domain name at your Elasticsearch cluster.

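If you prefer to script this DNS step, the sketch below shows how a record set could be upserted with the AWS SDK for Java v2 from Kotlin. This is only a minimal illustration: the hosted zone ID, domain name, and IP address are placeholders, and your cluster may sit behind a load balancer rather than a single address.

import software.amazon.awssdk.services.route53.Route53Client
import software.amazon.awssdk.services.route53.model.Change
import software.amazon.awssdk.services.route53.model.ChangeAction
import software.amazon.awssdk.services.route53.model.ChangeBatch
import software.amazon.awssdk.services.route53.model.ChangeResourceRecordSetsRequest
import software.amazon.awssdk.services.route53.model.RRType
import software.amazon.awssdk.services.route53.model.ResourceRecord
import software.amazon.awssdk.services.route53.model.ResourceRecordSet

// Upserts an A record that maps a domain (e.g. "es.example.com") to the
// Elasticsearch cluster's address. All three arguments are placeholders.
fun upsertElasticsearchRecord(hostedZoneId: String, domain: String, clusterIp: String) {
    Route53Client.create().use { route53 ->
        val recordSet = ResourceRecordSet.builder()
                .name(domain)
                .type(RRType.A)
                .ttl(300L)
                .resourceRecords(ResourceRecord.builder().value(clusterIp).build())
                .build()
        val request = ChangeResourceRecordSetsRequest.builder()
                .hostedZoneId(hostedZoneId)
                .changeBatch(ChangeBatch.builder()
                        .changes(Change.builder()
                                .action(ChangeAction.UPSERT)
                                .resourceRecordSet(recordSet)
                                .build())
                        .build())
                .build()
        route53.changeResourceRecordSets(request)
    }
}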

With your record set now ready, you need to figure out which IP addresses Google Dataflow will use to access your Elasticsearch cluster. Since Google Dataflow spins up ad-hoc worker VMs, you will have to create a VPC for your data pipeline so that you can keep track of the worker IPs when connecting from your Google Cloud Project.

To do this, go to the official website of Google Cloud Platform and log in using your Google credentials. Once you’ve logged in, go to the VPC networks page and click on the Create VPC network option.

A new dialogue box will now open up on your screen, where you need to provide the name, description, and custom subnet configuration for your VPC network.


This is how you can set up a VPC for Google Dataflow.
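Creating the VPC alone is not enough: the Dataflow job must also be told to launch its workers inside it, otherwise they will land on the project’s default network. Below is a minimal sketch of the relevant worker-pool options, assuming the Beam Dataflow runner’s DataflowPipelineOptions; the network and subnetwork names are placeholders for whatever you named your VPC and subnet above.

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions
import org.apache.beam.sdk.options.PipelineOptionsFactory

// Points Dataflow workers at the custom VPC created above. Keeping the workers
// on private IPs means their outbound traffic to Elasticsearch egresses through
// the NAT gateway configured in the next step.
fun dataflowOptionsForVpc(): DataflowPipelineOptions {
    val options = PipelineOptionsFactory.create().`as`(DataflowPipelineOptions::class.java)
    options.network = "dataflow-elasticsearch-vpc"                          // placeholder VPC name
    options.subnetwork = "regions/us-central1/subnetworks/dataflow-subnet"  // placeholder subnet path
    options.usePublicIps = false
    return options
}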

Step 2: Linking the Google Dataflow VPC to a NAT Gateway

With your VPC set up for your Google Cloud Project, you now need to link it to a cloud-based NAT gateway. This ensures that any traffic leaving your VPC passes only through the NAT gateway’s static IPs. To do this, go to the Google Cloud Console and log in with your credentials.


Once you’ve logged in, go to the Cloud NAT page and click on the Create NAT gateway option. A new dialogue box will now open up on your screen, where you need to provide the following information:

  • Gateway Name: Enter a name for your gateway.
  • VPC network: Select your VPC network from the drop-down list.
  • Region: Select a region for your gateway.
  • Cloud Router: Select an existing Cloud Router or create a new one.

Once you’ve added all the necessary details, click on the Create option. With your NAT gateway now set up, go to the EC2 console and add a new security group for your Elasticsearch cluster that accepts incoming traffic only from your NAT gateway’s static IPs, which is where your Google Dataflow workers’ traffic will now originate.
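If you would rather add that ingress rule programmatically, the sketch below uses the AWS SDK for Java v2 from Kotlin. The security group ID and NAT IP are placeholders, and it assumes your cluster listens on Elasticsearch’s default HTTP port 9200; adjust the port if yours differs.

import software.amazon.awssdk.services.ec2.Ec2Client
import software.amazon.awssdk.services.ec2.model.AuthorizeSecurityGroupIngressRequest
import software.amazon.awssdk.services.ec2.model.IpPermission
import software.amazon.awssdk.services.ec2.model.IpRange

// Allows inbound traffic to the Elasticsearch port only from the NAT gateway's
// static IP, which is the address your Dataflow workers will now appear to use.
fun allowNatIngress(securityGroupId: String, natStaticIp: String) {
    Ec2Client.create().use { ec2 ->
        val permission = IpPermission.builder()
                .ipProtocol("tcp")
                .fromPort(9200)
                .toPort(9200)
                .ipRanges(IpRange.builder().cidrIp("$natStaticIp/32").build())
                .build()
        ec2.authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest.builder()
                .groupId(securityGroupId)
                .ipPermissions(permission)
                .build())
    }
}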


This is how you can link your VPC network for Google Cloud Project with NAT gateways.

Step 3: Using ElasticsearchIO to Fetch Data from Elasticsearch

With your Elasticsearch cluster & GCP connection now up and running, you can start fetching data from your Elasticsearch database. To do this, you can make use of the ElasticsearchIO.read() function of the ElasticsearchIO class, available in the Apache Beam SDK, as follows:

// Builds an ElasticsearchIO source from the connection settings in argMap and the given query.
private fun readFromES(argMap: HashMap<String, String>, query: String): ElasticsearchIO.Read? {
    return ElasticsearchIO.read()
            .withConnectionConfiguration(ElasticsearchIO.ConnectionConfiguration
                    .create(arrayOf(argMap["source"]!!), argMap["index"]!!, argMap["type"]!!)
                    .withConnectTimeout(argMap["connectTimeout"]!!.toInt())
                    .withSocketAndRetryTimeout(argMap["socketAndRetryTimeout"]!!.toInt()))
            .withBatchSize(argMap["batchSize"]!!.toLong())
            .withQuery(query)
}

In case you don’t have the Apache Beam SDK installed on your system, you can download it from here.

The ElasticsearchIO.read() function helps you fetch data from Elasticsearch in the form of JSON documents. It has various parameters, such as the source address, index, type, timeouts, and batch size, that you can configure as per your Elasticsearch database. You will also need to pass a data query as an argument that will help fetch your data.
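For instance, if you only want to pull documents that have changed recently rather than the whole index, you could pass a range query such as the one below. The modified field is an assumption borrowed from the sample schema used later in this guide; replace it with a timestamp field from your own index.

// Fetch only documents modified in the last day, so that scheduled runs load
// data incrementally instead of re-reading the entire index.
val incrementalQuery = """
    {
      "query": {
        "range": {
          "modified": { "gte": "now-1d/d" }
        }
      }
    }
""".trimIndent()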

This is how you can start fetching data from Elasticsearch as JSON documents to connect Elasticsearch to BigQuery.

Are you tired of following a tedious setup process? Click here to check out the method of using Hevo Data that will let you integrate Elasticsearch to BigQuery in an automated and hassle-free manner!

Step 4: Configuring & Transforming Elasticsearch Documents

Once you’ve fetched your Elasticsearch data, you now need to convert your JSON documents into the form that Google BigQuery tables support, that is, table rows. To do this, you can use Gson to parse each JSON document and map it onto the desired schema based on the relevant keys. It will then fetch the data values and return them as a TableRow object.

You can use the following lines of code to map your Elasticsearch JSON document into a TableRow object that Google BigQuery supports:

private fun mapToTableRow() = MapElements.into(TypeDescriptor.of(TableRow::class.java))
        .via(ContactStringToTableRow())
class ContactStringToTableRow : SimpleFunction<String, TableRow>() {
    override fun apply(input: String): TableRow {
        val gson: Gson = GsonBuilder().create()
        val parsedMap: Map<String, Any> = gson.fromJson(input, object : TypeToken<Map<String, Any>>() {}.type)
        return TableRow()
                .set("id", parsedMap["id"].toString().toDouble().toLong())
                .set("first_name", parsedMap["first_name"].toString())
                .set("last_name", parsedMap["last_name"].toString())
                .set("address", parsedMap["address"].toString())
                .set("birthday", ISODateTimeFormat.dateTime().print((parsedMap["birthday"].toString().toDouble().toLong())))
                .set("person_json", input)
                .set("created", ISODateTimeFormat.dateTime().print((parsedMap["created"].toString().toDouble().toLong())))
                .set("modified", ISODateTimeFormat.dateTime().print((parsedMap["modified"].toString().toDouble().toLong())))
    }
}

This is how you can transform your Elasticsearch JSON data to connect Elasticsearch to BigQuery.
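The loading step that follows expects a TableReference for the destination table and a list of TableFieldSchema objects describing its columns. Here is a minimal sketch that mirrors the fields produced by the ContactStringToTableRow mapping above; the project, dataset, and table names are placeholders, and you should adjust the column names and types to your own documents.

import com.google.api.services.bigquery.model.TableFieldSchema
import com.google.api.services.bigquery.model.TableReference

// Schema matching the TableRow fields built above. TIMESTAMP columns accept the
// ISO-8601 strings produced by ISODateTimeFormat.dateTime().
private fun contactFields(): ArrayList<TableFieldSchema> = arrayListOf(
        TableFieldSchema().setName("id").setType("INTEGER"),
        TableFieldSchema().setName("first_name").setType("STRING"),
        TableFieldSchema().setName("last_name").setType("STRING"),
        TableFieldSchema().setName("address").setType("STRING"),
        TableFieldSchema().setName("birthday").setType("TIMESTAMP"),
        TableFieldSchema().setName("person_json").setType("STRING"),
        TableFieldSchema().setName("created").setType("TIMESTAMP"),
        TableFieldSchema().setName("modified").setType("TIMESTAMP")
)

// Destination table; replace the identifiers with your own project, dataset, and table.
private fun contactTable(): TableReference = TableReference()
        .setProjectId("my-gcp-project")
        .setDatasetId("elasticsearch_export")
        .setTableId("contacts")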

Step 5: Loading data from Elasticsearch to BigQuery

Once you’ve transformed your Elasticsearch data, you can now make use of the BigQueryIO.writeTableRows() function of the BigQueryIO class, available in the Apache Beam SDK, as follows:

private fun writeToBQ(tableReference: TableReference?, fields: ArrayList<TableFieldSchema>): BigQueryIO.Write<TableRow>? {
    return BigQueryIO.writeTableRows()
            .to(tableReference)
            .optimizedWrites()
            .withSchema(TableSchema().setFields(fields))
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
}
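For completeness, here is a minimal sketch of how the three helpers above could be assembled into a single Beam pipeline and submitted to Google Dataflow. The entry point, the option values, and the GCS paths are all placeholders; in a production pipeline you would also apply the VPC worker options shown in Step 1.

import com.google.api.services.bigquery.model.TableFieldSchema
import com.google.api.services.bigquery.model.TableReference
import org.apache.beam.runners.dataflow.DataflowRunner
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions
import org.apache.beam.sdk.Pipeline
import org.apache.beam.sdk.options.PipelineOptionsFactory

// Wires the read, transform, and write steps together and runs the job on Dataflow.
fun runElasticsearchToBigQuery(
        argMap: HashMap<String, String>,
        query: String,
        tableReference: TableReference,
        fields: ArrayList<TableFieldSchema>
) {
    val options = PipelineOptionsFactory.create().`as`(DataflowPipelineOptions::class.java)
    options.runner = DataflowRunner::class.java
    options.project = "my-gcp-project"            // placeholder project ID
    options.region = "us-central1"                // placeholder region
    options.tempLocation = "gs://my-bucket/tmp"   // placeholder staging bucket

    val pipeline = Pipeline.create(options)
    pipeline
            .apply("ReadFromElasticsearch", readFromES(argMap, query)!!)
            .apply("MapToTableRow", mapToTableRow())
            .apply("WriteToBigQuery", writeToBQ(tableReference, fields)!!)
    pipeline.run().waitUntilFinish()
}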

With your data pipeline now ready, every execution will add a new job to Google Dataflow. In case you want to know more about a job, you can click on its name in the Dataflow console to retrieve information such as auto-scaling behaviour, metrics, and logs.


This is how you can connect Elasticsearch to BigQuery using Google Dataflow and Apache Airflow & Beam.

Step 6: Scheduling Data Pipeline Execution with Apache Airflow (Optional Step)

In case you want your data pipeline to execute at scheduled intervals of time, you can make use of Apache Airflow & its DAGs to achieve this functionality. You can refer to the official documentation of Apache Airflow here, which will help you get started with the installation and setup process of Apache Airflow.

With Apache Airflow in place, a DAG triggered at a pre-defined interval will submit the pipeline’s jar file to Google Dataflow based on the configuration parameters of your DAG, thereby starting the ETL job. To do this, you can use the following lines of code:

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.contrib.operators.dataflow_operator import DataFlowJavaOperator

# dag_args holds your default arguments (owner, start_date, retries, etc.) and
# all_options holds the Dataflow job parameters (project, region, temp location).
dag = DAG('dataPipelineFromElasticsearchToBigquery-dag', default_args=dag_args, catchup=False)
start = DummyOperator(task_id='start', dag=dag)
task = DataFlowJavaOperator(
    task_id='daily-dataPipelineFromElasticsearchToBigquery-task',
    jar='gs://dataPipelineFromElasticsearchToBigquery/lib/dataPipelineFromElasticsearchToBigquery.jar',
    options=all_options,
    dag=dag)
start >> task

This is how you can set up interval-based execution of your data pipeline using Apache Airflow.

Limitations of Integrating Elasticsearch to Google BigQuery using Google Dataflows and Apache Airflow & Beam

  • Integrating Elasticsearch with Google BigQuery using Apache Beam & Google Dataflow requires you to write custom Kotlin-based code to fetch, transform, and then load data. Hence, you must have strong technical knowledge.
  • Setting up a dedicated VPC network, NAT gateways, etc., can be a challenging task, especially for beginners, as it requires a deep understanding of how IP addresses & subnetting work.
  • You must ensure that you provide correct parameters, such as the table name and schema, as even a small error can cause the ETL process to fail.
  • It requires a working knowledge of several services, such as Apache Airflow, Apache Beam, and Google Dataflow, which can become a bottleneck for teams unfamiliar with them.
Solve your data replication problems with Hevo’s reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

Use Cases of BigQuery Elasticsearch Integration

  • Data Analytics: By using an Elasticsearch connector for BigQuery, you can analyze your business data in real-time. BigQuery’s Machine Learning features enable you to generate insightful analytics about customers, campaigns, and marketing pipelines.
  • Data Storage: Building an Elasticsearch to BigQuery ETL pipeline makes the data storage and transformation process easier. You can store large amounts of data, for example, marketing data from multiple sources, in a centralized cloud-based location such as BigQuery, and access and query it without expensive storage hardware.

Conclusion

This article teaches you how to connect Elasticsearch to BigQuery with ease. It provides in-depth knowledge about the concepts behind every step to help you understand and implement them efficiently. These methods, however, can be challenging, especially for a beginner, and this is where Hevo saves the day. Hevo Data, a No-code Data Pipeline, helps you transfer data from a source of your choice in a fully-automated and secure manner without having to write any code. This article also sheds light on some BigQuery connector for Elasticsearch use cases.

Visit our website to explore Hevo.

Want to take Hevo for a spin? Sign up and experience the feature-rich Hevo suite first hand. You can also have a look at our unbeatable pricing, which will help you choose the right plan for your business needs!

Tell us about your experience of connecting Elasticsearch to BigQuery! Share your thoughts in the comments section below!

Divij Chawla
Former Marketing Operations and Analytics Manager, Hevo Data

Divij Chawla worked with Hevo in driving the Marketing Operations and Analytics team. He has a keen interest in data analysis, software architecture, and the creation of technical content in the data realm.

Associate Customer Experience, Hevo Data

Parthiv, proficient in MongoDB, MySQL, REST APIs, and Snowflake, elevates Hevo's customer experience by prioritizing feedback, proactive support, and feature advocacy. Committed to continuous improvement, Parthiv ensures every interaction reflects Hevo's dedication to excellence.
