Extracting data from disparate sources has long been a challenge for businesses all around the world. Elasticsearch is a popular free and open-source technology for building ingestion pipelines. It functions as a search and analytics engine that can handle textual, numerical, geospatial, structured, and unstructured data.

Elasticsearch ingest pipelines let you transform data by applying basic transformations before the document is indexed, that is, before a searchable reference to it is created. After indexing is complete, you can use the Elasticsearch API to search for and retrieve the document.

In this article, you will learn how to ingest data to Elasticsearch effectively. It covers four different methods for ingesting data, along with ingest pipelines, best practices, and failure handling.


Ingest Data to Elasticsearch – Top 4 Methods 

1) Ingest Data to Elasticsearch: Logstash

Logstash is a free and open server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to a stash of your choice.

Steps to Ingest Data with Logstash

  • Step 1: Logstash can be installed using your package manager or by downloading the tgz/zip file and unzipping it.
  • Step 2: Install the RSS input plugin for Logstash, which allows you to read RSS data sources: ./bin/logstash-plugin install logstash-input-rss.
  • Step 3: Copy the following Logstash pipeline definition to a new file, /elastic-rss.conf:
input { 
  rss { 
    url => "/blog/feed" 
    interval => 120   # poll the RSS feed every 120 seconds
  } 
} 
filter { 
  mutate { 
    # keep the raw HTML and make a plain-text copy of the post
    rename => [ "message", "blog_html" ] 
    copy => { "blog_html" => "blog_text" } 
    copy => { "published" => "@timestamp" } 
  } 
  mutate { 
    # strip HTML tags and collapse newlines/tabs into spaces
    gsub => [  
      "blog_text", "<.*?>", "", 
      "blog_text", "[\n\t]", " " 
    ] 
    remove_field => [ "published", "author" ] 
  } 
} 
output { 
  stdout { 
    codec => dots   # print one dot per event processed
  } 
  elasticsearch { 
    hosts => [ "https://<your-elasticsearch-url>" ] 
    index => "elastic_blog" 
    user => "elastic" 
    password => "<your-elasticsearch-password>" 
  } 
}
  • Step 4: Modify the hosts and password parameters in the file above to match your Elasticsearch Service endpoint and the password of the elastic user. You can find the Elasticsearch endpoint URL on your deployment's details page in Elastic Cloud (Copy Endpoint URL).
  • Step 5: Start Logstash and run the pipeline: ./bin/logstash -f /elastic-rss.conf. Logstash will take several seconds to start up, after which you’ll start seeing dots (…..) written to the console. Each dot represents one document ingested into Elasticsearch.
  • Step 6: Start Kibana. To validate that 20 documents have been ingested, run a search against the elastic_blog index in the Kibana Dev Tools Console, as shown below.
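For example, a match_all query returns the indexed documents, and hits.total in the response reports how many have been ingested:

POST elastic_blog/_search
{
  "query": {
    "match_all": {}
  }
}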

2) Ingest Data to Elasticsearch: Language Clients

Ingesting data into Elasticsearch using language clients offers a more direct approach compared to Logstash. The steps are outlined below, followed by a minimal Python sketch.

Step 1: Choose your language client:

  • Elastic provides official client libraries for various languages, such as Java, Python, Node.js, and Go. Choose the one compatible with your application.

Step 2: Connect to Elasticsearch:

  • Establish a connection to your Elasticsearch cluster using the client library. This typically involves providing the host and port information.
  • Some clients may require additional authentication configuration like API keys or tokens.

Step 3: Prepare your data:

  • Format your data into a JSON document structure recognizable by Elasticsearch. This can involve constructing dictionaries or objects in your chosen language.
  • Ensure the data fields correspond to the mapping configuration of your target index in Elasticsearch.

Step 4: Index the data:

  • Use the client library’s specific indexing method to send your data document to Elasticsearch. This typically involves specifying the index name and the data itself.
  • Some clients offer bulk indexing capabilities for sending multiple documents in a single request.

Step 5: Handle errors and responses:

  • Check for any errors returned by the client library during indexing. Common errors may include connection issues, invalid data formatting, or permission problems.
  • Process the response received from Elasticsearch, which might contain information about the indexed document (ID, version, etc.).
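The sketch below illustrates these steps with the official Python client (elasticsearch-py, 8.x-style API). It is a minimal, illustrative example: the endpoint URL, API key, index name, and document fields are placeholders you would replace with your own values.

from elasticsearch import Elasticsearch, helpers

# Step 2: connect to the cluster (placeholder endpoint and API key).
es = Elasticsearch(
    "https://<your-elasticsearch-url>:9243",
    api_key="<your-api-key>",
)

# Step 3: prepare a JSON document whose fields match the index mapping.
doc = {
    "title": "How to Ingest Into Elasticsearch Service",
    "date": "2019-08-15T14:12:12",
    "description": "Indexed via the Python client",
}

# Step 4: index a single document into the target index.
response = es.index(index="my_first_index", id="1", document=doc)

# Step 5: inspect the response (result, _id, _version) and handle errors.
print(response["result"], response["_id"])

# Bulk indexing: send multiple documents in a single request.
actions = [
    {"_index": "my_first_index", "_source": {"title": f"Post {i}"}}
    for i in range(2, 5)
]
helpers.bulk(es, actions)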

3) Ingest Data to Elasticsearch: Kibana Dev Tools

The Kibana Dev Tools Console is a preferred tool for building and debugging Elasticsearch requests, as well as for ingesting data into Elasticsearch. Dev Tools exposes the full power and flexibility of the generic Elasticsearch REST API while abstracting away the technicalities of the underlying HTTP requests. Unsurprisingly, you can PUT raw JSON documents into Elasticsearch using the Dev Tools Console: 

PUT my_first_index/_doc/1 
{ 
    "title" : "How to Ingest Into Elasticsearch Service", 
    "date" : "2019-08-15T14:12:12", 
    "description" : "This is an overview article about the various ways to ingest into Elasticsearch Service" 
}
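Once the document is indexed, you can retrieve it from the same console by its ID, for example:

GET my_first_index/_doc/1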

4) Ingest Data to Elasticsearch: Elastic Beats

Elastic Beats is a collection of lightweight data shippers for sending data to Elasticsearch Service. Beats have a low runtime overhead, allowing them to run and gather data on devices with minimal hardware resources, such as IoT, edge, or embedded devices. If you need to gather data but don’t have the capacity to run resource-intensive data collectors, Beats are the way to go. This kind of ubiquitous data collection across all of your networked devices enables you, for example, to swiftly detect and respond to system-wide faults and security incidents.

Beats aren’t just for resource-constrained systems, though. They can also be employed on computers with more available hardware resources.

Steps to use Elastic Beats to ingest data to Elasticsearch

  • Step 1: Download and install the Beat of your choice. Most users either use the Elastic-supplied repositories for their operating system’s package manager (DEB/RPM) or simply download and unzip the provided tgz/zip bundles.
  • Step 2: Configure the Beat and turn on any modules you like.
    • For example, to collect metrics about Docker containers running on your system, enable the Docker module with sudo metricbeat modules enable docker if you installed via the package manager, or with ./metricbeat modules enable docker (from the unzipped directory) if you used the tgz/zip bundle.
    • The Cloud ID specifies the Elasticsearch Service deployment to which the collected data is sent. Add the Cloud ID and authentication information to the Metricbeat configuration file (metricbeat.yml):
cloud.id: cluster_name:ZXVy...Q2Zg==
cloud.auth: "elastic:YOUR_PASSWORD"
  • About cloud.id and cloud.auth:
    • The cloud.id value was issued to you when your cluster was created. cloud.auth is a colon-separated concatenation of a username and password that have been granted adequate privileges within the Elasticsearch cluster.
    • To get started quickly, use the elastic superuser and the password provided during cluster creation. The configuration file is in the /etc/metricbeat directory if you installed via the package manager, or in the unzipped directory if you used the tgz/zip bundle.
  • Step 3: Load the pre-built dashboards into Kibana. Most Beats and their modules ship with pre-built Kibana dashboards. If you installed via the package manager, load them with sudo metricbeat setup; if you used the tgz/zip bundle, run ./metricbeat setup from the unzipped directory.
  • Step 4: Start the Beat. If you installed via your package manager on a systemd-based Linux system, use sudo systemctl start metricbeat; if you installed from the tgz/zip bundle, use ./metricbeat -e.

If everything goes well, data will begin flowing into your Elasticsearch Service deployment.
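If you are not running on Elastic Cloud, you can instead point the Beat at any reachable cluster through the output section of its configuration. The snippet below is a minimal sketch of that alternative for metricbeat.yml; the host, username, and password are placeholders:

# metricbeat.yml – alternative output for a self-managed cluster (sketch)
output.elasticsearch:
  hosts: ["https://<your-elasticsearch-url>:9200"]
  username: "elastic"
  password: "<your-elasticsearch-password>"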

What are Ingest Pipelines?

Before indexing (that is, before adding a searchable reference to the document), an ingest pipeline lets you perform common transformations on your data. For example, you can use an Elasticsearch pipeline to drop fields, extract values from text, and enrich your data. A pipeline consists of a series of configurable tasks called processors. Each processor runs in turn, making some change to the incoming document. Once the processors have completed, Elasticsearch stores the transformed document in the target data stream or index. To create and manage Elasticsearch ingest pipelines, use Kibana’s Ingest Pipelines feature or the ingest APIs. Elasticsearch stores pipelines in the cluster state.
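As a minimal sketch (the pipeline name, field name, and index name below are illustrative), you can create a pipeline with a single processor and then reference it when indexing a document:

PUT _ingest/pipeline/lowercase-pipeline
{
  "description": "Lowercase the 'email' field before indexing",
  "processors": [
    {
      "lowercase": {
        "field": "email"
      }
    }
  ]
}

PUT my_first_index/_doc/2?pipeline=lowercase-pipeline
{
  "email": "KMAX@SAMPLE.COM"
}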

What are Ingest Pipelines used for?

Do you want to make some changes to your data but don’t want to use Logstash or another data processing tool? Elasticsearch ingest pipelines may be a viable option for you. They let you customize your data to your specific requirements with minimal effort. An ingest pipeline runs on an Elasticsearch node (or on a dedicated ingest node, if one is configured) and performs a sequence of operations on the incoming data.

Consider the following data structure to better understand how Elasticsearch Ingest Pipelines work:

{
  "id": 5,
  "first_name": "Karl",
  "last_name": "Max",
  "email": "kmax@sample.com",
  "ip_address": "100.1.193.2",
  "activities": "Cricket, Dancing, Reading"
}, 
{
  "id": 6,
  "first_name": "Jim",
  "last_name": "Sanders",
  "email": "jsanders@example.com",
  "ip_address": "122.100.4.22",
  "activities": "Driving, Running, Cycling"
}

Given the two documents above, you can perform operations such as the following with Elasticsearch ingest pipelines (a pipeline sketch combining several of them follows the list):

  • Rename Fields: change the field “first_name” to “firstName”.
  • Remove Fields: drop the “email” field.
  • Split Fields: turn a string value into an array using a separator. For example, change “activities” from “Cricket, Dancing, Reading” to [“Cricket”, “Dancing”, “Reading”].
  • GeoIP Lookup: look up geographical information for the “ip_address” field.
  • Run a Script: for extra flexibility, e.g., encoding sensitive data or combining two fields into another.
  • Convert Fields: change the type of a field, for example from string to integer.
  • Enrich Documents: use a lookup to add further information to each event.
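The sketch below (the pipeline name is illustrative) combines several of these operations into a single pipeline for the sample documents above:

PUT _ingest/pipeline/users-pipeline
{
  "processors": [
    { "rename": { "field": "first_name", "target_field": "firstName" } },
    { "remove": { "field": "email" } },
    { "split":  { "field": "activities", "separator": ", " } },
    { "geoip":  { "field": "ip_address" } }
  ]
}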

You can also use Elasticsearch ingest pipelines when importing data in its raw form, such as the log lines shown below:

2022-04-20T18:17:11.003Z new.sample.com There seems to be a problem

2021-05-11T18:17:11.003Z,new.sample.com, There seems to be a problem

The following operations can be used to structure this data (a processor sketch follows the list):

  • Using grok or dissect to separate fields: put the timestamp (“2022-04-20…”) in a “date” field, “new.sample.com” in an “origin” field, and “There seems to be a problem” in a “raw_message” field.
  • Creating fields from CSV: for the second sample, use a comma as the separator and label the first value “date”, the second “origin”, and the third “raw_message”.
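As a sketch (the pipeline name is illustrative, and each raw line is assumed to arrive in a message field), a dissect processor could handle the space-separated first line and a csv processor the comma-separated second one, using if conditions to pick the right processor per document:

PUT _ingest/pipeline/raw-logs-pipeline
{
  "processors": [
    {
      "dissect": {
        "if": "!ctx.message.contains(',')",
        "field": "message",
        "pattern": "%{date} %{origin} %{raw_message}"
      }
    },
    {
      "csv": {
        "if": "ctx.message.contains(',')",
        "field": "message",
        "target_fields": ["date", "origin", "raw_message"],
        "separator": ","
      }
    }
  ]
}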

Ingest Data to Elasticsearch: Best Practices 

The following are the best practices to be carried out to ingest data to Elasticsearch:

  • Create a data architecture that meets your requirements. The default hot tier of Elasticsearch nodes in your Elasticsearch Service deployment holds your most frequently requested data. Based on your own access and retention requirements, you can add warm, cold, and frozen data tiers, as well as automated data deletion.
  • Take regular backup snapshots and make your data highly available for production environments or other key data repositories.
  • Existing data can be migrated and uploaded into your deployment.
  • To bring in new data sources, add ingest integrations. You have the option of using Elastic’s integrations or creating your own:
    • Check out the Elastic integrations page to see what Elastic has to offer.
    • Check out these tutorials to integrate with Cloud Service Provider log and metric services: AWS, GCP, and Azure.
    • Choose an ingestion method to write your own.
  • By using the Elastic Common Schema (ECS), you can better analyze, visualize, and correlate your events. Elastic integrations come pre-configured with ECS, and ECS is recommended if you’re developing your own integrations.
  • Once your data tiers are deployed and data is flowing, you can manage the index lifecycle.

Handling Ingest Pipeline Failures

You can use the following options to handle Elasticsearch ingest pipeline failures efficiently:

  • By default, a pipeline stops processing when one of its processors fails. To ignore a processor failure and run the pipeline’s remaining processors, set ignore_failure to true on that processor.
PUT _ingest/pipeline/sample-pipeline
{
  "processors": [
    {
      "rename": {
        "description": "Rename 'provider' to 'cloud.provider'",
        "field": "provider",
        "target_field": "cloud.provider",
        "ignore_failure": true
      }
    }
  ]
}
  • The on_failure parameter defines a list of processors to run immediately after a processor fails. If you have specified on_failure, Elasticsearch then runs the rest of the processors in the ingest pipeline, even if the on_failure configuration is empty.
PUT _ingest/pipeline/sample-pipeline
{
  "processors": [
    {
      "rename": {
        "description": "Rename 'provider' to 'cloud.provider'",
        "field": "provider",
        "target_field": "cloud.provider",
        "on_failure": [
          {
            "set": {
              "description": "Set 'error.message'",
              "field": "error.message",
              "value": "Field 'provider' does not exist. Cannot rename to 'cloud.provider'",
              "override": false
            }
          }
        ]
      }
    }
  ]
}
  • You can nest a list of on_failure processors inside another on_failure block to enable nested error handling.
PUT _ingest/pipeline/sample-pipeline
{
  "processors": [
    {
      "rename": {
        "description": "Rename 'provider' to 'cloud.provider'",
        "field": "provider",
        "target_field": "cloud.provider",
        "on_failure": [
          {
            "set": {
              "description": "Set 'error.message'",
              "field": "error.message",
              "value": "Field 'provider' does not exist. Cannot rename to 'cloud.provider'",
              "override": false,
              "on_failure": [
                {
                  "set": {
                    "description": "Set 'error.message.multi'",
                    "field": "error.message.multi",
                    "value": "Document encountered multiple ingest errors",
                    "override": true
                  }
                }
              ]
            }
          }
        ]
      }
    }
  ]
}
  • You can also specify on_failure at the pipeline level, in addition to the processor level. If a processor fails without its own on_failure configuration, Elasticsearch falls back to this pipeline-level parameter; in that case, it does not attempt to run the pipeline’s remaining processors.
PUT _ingest/pipeline/my-pipeline
{
  "processors": [ ... ],
  "on_failure": [
    {
      "set": {
        "description": "Index document to 'failed-<index>'",
        "field": "_index",
        "value": "failed-{{{ _index }}}"
      }
    }
  ]
}
  • The document metadata fields on_failure_message, on_failure_processor_type, on_failure_processor_tag, and on_failure_pipeline can be used to retrieve specific information about an ingest pipeline failure. These fields are only accessible from within an on_failure block. For example, the pipeline below uses these metadata fields to record error information about pipeline failures in documents.
PUT _ingest/pipeline/sample-pipeline
{
  "processors": [ ... ],
  "on_failure": [
    {
      "set": {
        "description": "Record error information",
        "field": "error_information",
        "value": "Processor {{ _ingest.on_failure_processor_type }} with tag {{ _ingest.on_failure_processor_tag }} in pipeline {{ _ingest.on_failure_pipeline }} failed with message {{ _ingest.on_failure_message }}"
      }
    }
  ]
}

Conclusion

In this article, you learned how to ingest data to Elasticsearch. You can either use Kibana, a sophisticated data visualization tool, to build a pipeline through its user-friendly interface, or send a create pipeline API call for a more programmatic approach. After you’ve configured your Elasticsearch ingest pipeline, you can test it by running it against an example document. You can apply conditions to both processors and ingest pipelines using the if parameter, and you can handle processor failures efficiently with parameters like on_failure and ignore_failure.

It’s critical to combine the data collected and managed across your business’s many applications and databases for a comprehensive performance review. Continuously monitoring data connectors, however, is a time-consuming and resource-intensive task. Doing it effectively means setting aside some of your technical bandwidth to integrate data from all sources, clean and transform it, and then load it into a Cloud Data Warehouse, BI tool, or other destination for further Business Analytics. A cloud-based ETL tool such as Hevo Data can easily solve all of these problems.


Hevo Data, a No-code Data Pipeline provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of Desired Destinations, with a few clicks. Hevo Data with its strong integration with 150+ sources (including 40+ free sources) allows you to not only export data from your desired data sources & load it to the destination of your choice, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of learning how to ingest data to Elasticsearch! Let us know in the comments section below!

Harsh Varshney
Research Analyst, Hevo Data

Harsh is a data enthusiast with over 2.5 years of experience in research analysis and software development. He is passionate about translating complex technical concepts into clear and engaging content. His expertise in data integration and infrastructure shines through his 100+ published articles, helping data practitioners solve challenges related to data engineering.
