Elasticsearch REST API: 5 Comprehensive Aspects

Talha • Last Modified: December 29th, 2022


One of the best things about the Elasticsearch REST (Representational State Transfer) API (Application Programming Interface) is that it allows you to integrate, manage, and query the indexed data in a variety of ways. The Elasticsearch REST API can be called directly or programmatically to give you access to its features. Elasticsearch provides dozens of APIs and continues to introduce more with each release.

The most commonly used Elasticsearch REST APIs include Document API, Search API, Index API, SQL API, and Cluster API. Indexing is at the heart of Elasticsearch. It’s the feature behind the super-fast searches across terabytes of data. But you can’t perform searches on data that doesn’t exist.

This article will guide you through the inner workings of the Elasticsearch REST API in a language-agnostic way. You will get to know about the basic API methods such as creating indexes, ingesting data as well as performing searches using Elasticsearch REST API in further sections. Let’s get started.


What is Elasticsearch?

Elasticsearch is a free and open-source search engine and analytics suite that is at the heart of the Elastic (ELK) Stack. It was created by Shay Banon, who released its first version in 2010, and it is today maintained by Elasticsearch B.V. Elasticsearch simplifies how you perform actions such as Application Searches, Website Searches, Enterprise Searches, Application Monitoring, Clickstream Analysis, Log Analysis, Business Analytics, Operational Intelligence, and many more. 

Data in Elasticsearch is primarily stored as schema-less JSON (JavaScript Object Notation) documents. But regardless of whether you’re working with structured or unstructured, textual, numerical, or geospatial data, you can use Elasticsearch to store and index it in a way that facilitates blazing-fast searches. When working with Elasticsearch, you’re not limited to data collection: you can also use it to aggregate information and gain visibility into trends and patterns in your data.

You can use it in the Cloud by deploying the Elasticsearch Service on Elastic Cloud, or in your own environment by running the official Docker images. The managed Cloud service is by far the most scalable: the distributed architecture of Elastic Cloud allows your deployment to scale seamlessly with surges in the volume of data and queries.

To know more about Elasticsearch, visit this link.

Key Features and Benefits of Using Elasticsearch

Along with the aforementioned advantages, Elasticsearch also packs loads of other interesting features like:

  • Auto-completion: We’ve all experienced it: once we start typing something into Google, the search box automatically displays suggestions based on what we are trying to find. Elasticsearch supports a similar auto-completion feature that merchants can host on their E-commerce websites, so customers can see and browse the available products without pressing Enter after every keystroke. 
  • High Performance: Elasticsearch can manage and analyze petabytes of data in parallel, allowing it to quickly find the best matches for your searches. Elasticsearch’s Lucene architecture dramatically decreases latency from the moment a document is indexed until it becomes searchable.
  • Elasticsearch Shards to avoid Hardware Failure: The documents stored in Elasticsearch are distributed across different containers known as shards, which are duplicated to provide redundant copies of the data in case of hardware failure. The distributed nature of Elasticsearch allows it to scale out to hundreds (or even thousands) of servers and handle petabytes of data.
  • Easy Application Deployment: Elasticsearch supports a wide range of programming languages, including Java, Python, PHP, JavaScript, Node.js, Ruby, and many others. Its basic REST-based APIs, HTTP interface, and usage of schema-free JSON documents make it simple to get started and quickly build apps for a range of use-cases.
  • Complementary Tools and Plugins: Elasticsearch is pre-integrated with Kibana, a popular visualization and reporting tool. It also integrates with Beats and Logstash, allowing you to easily transform and load source data into your Elasticsearch cluster. To add further functionality to your apps, you can also employ a number of open-source Elasticsearch plugins, such as language analyzers and suggesters.

To learn more about Elasticsearch and its features, visit Elastic’s official website here- Elasticsearch features.

What are the Key Use Cases of Elasticsearch?

Elasticsearch can be used in a wide variety of use cases due to the speed and flexibility that it provides in handling the data. Developers have found novel ways of using Elasticsearch, such as:

  • Elasticsearch can be used to add a search box to an app or website for providing full-text hosted search functionality. GitHub, StackOverflow, The Guardian, and even Wikipedia are a few examples of big enterprises that use Elasticsearch.
  • Elasticsearch can be used to store and analyze logs and metrics. 
  • You can use Machine Learning to automatically model the behavior of your data in real-time. This can help you in forecasting trends as well as in identifying anomalies and outliers in your data. 
  • Elasticsearch can serve as a Geographic Information System (GIS), storing and analyzing geospatial data.

Simplify Data Analysis Using Hevo’s No-code Data Pipeline

Hevo Data helps you directly transfer data from 100+ data sources (including 40+ free sources) like Elasticsearch to Business Intelligence tools, Data Warehouses, or a destination of your choice in a completely hassle-free & automated manner. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

Hevo takes care of all the data preprocessing needed to set up the integration and lets you focus on key business activities, drawing much more powerful insights into how to generate more leads, retain customers, and take your business to new heights of profitability. It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination.

Get Started with Hevo for Free

Check out what makes Hevo amazing:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, E-Mail, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

What are the Conventions in Elasticsearch REST API?

Below are the conventions associated with Elasticsearch REST API:

  • Elasticsearch exposes its REST APIs over HTTP (Hypertext Transfer Protocol).
  • A good number of the GET APIs in Elasticsearch support sending a payload body with a GET request. GET requests are widely used for soliciting information; a good example is the Search API.
  • Passing a payload body together with a GET request is against the REST-style approach and is not supported by all HTTP libraries.
  • All Elasticsearch GET APIs that require a payload body can alternatively be submitted as POST requests.
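As a quick illustration of the last two conventions, the sketch below builds the same search request twice: once as a GET with a body and once as the equivalent POST. It uses only Python's standard library and assumes a hypothetical local cluster at localhost:9200; the requests are constructed but not sent.

```python
import json
import urllib.request

# Hypothetical local cluster address -- adjust for your deployment.
ES = "http://localhost:9200"

query = {"query": {"match_all": {}}}
body = json.dumps(query).encode("utf-8")
headers = {"Content-Type": "application/json"}

# A GET with a payload body (supported by Elasticsearch, but not by
# every HTTP library)...
get_req = urllib.request.Request(
    f"{ES}/animals/_search", data=body, headers=headers, method="GET"
)

# ...can always be submitted as an equivalent POST instead.
post_req = urllib.request.Request(
    f"{ES}/animals/_search", data=body, headers=headers, method="POST"
)
```

Both requests carry the same URL and payload; only the HTTP verb differs, which is exactly what the convention guarantees.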

How to Ingest Data with Elasticsearch REST API?

Elasticsearch indices store data as JSON documents (objects). You can explicitly create an index, or Elasticsearch will by default create one around the first document that is added. This lets you add new documents to an index without worrying about whether the index already exists: if it doesn’t, a new one will be created.

1) Ingesting a Document with Elasticsearch REST API

Let’s begin by adding a document into an index using the HTTP PUT method. You can follow the below steps to ingest a document using the Elasticsearch REST API:

Step 1: Put the Document into the Index

PUT is the HTTP method for creating new resources, and you will use it to create a new index and document in Elasticsearch. To follow along, you can use curl, Postman, the Dev console in Kibana, or any other HTTP tool. This guide uses the Dev console in Kibana, so adjust accordingly based on your tool of choice.

To create an index with a new document, make the following HTTP call:

PUT /animals/_doc/1
{
  "name":"cat",
  "color":"white"
}

This endpoint creates an index named animals and puts a single document into the index with an ID of 1. Quite simple right?
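Outside the Kibana Dev console, the same call can be issued from any HTTP client. The sketch below builds the PUT request with Python's standard library, assuming a hypothetical local cluster at localhost:9200; the request is only constructed here, and the commented-out line shows how you would actually send it.

```python
import json
import urllib.request

ES = "http://localhost:9200"  # assumed local cluster; adjust as needed

doc = {"name": "cat", "color": "white"}

# PUT /animals/_doc/1 -- index the document under an explicit ID of 1.
req = urllib.request.Request(
    f"{ES}/animals/_doc/1",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

# To execute against a running cluster:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```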

The _doc part is a mapping type, used to separate collections inside the same searchable index. It indicates the type of the document; in this context, a type is a JSON object. Note that mapping types are deprecated and are being removed from Elasticsearch.

Earlier versions of Elasticsearch supported multiple types of documents in the same index. You could have an animal index with types like _mammals, _reptiles, _insects, and _birds, each with a different structure. Unfortunately, this hinders search performance, which is why types are being phased out of Elasticsearch. It’s much more efficient to have an index for each type, like this: /mammals/_doc, /reptiles/_doc, /insects/_doc, and /birds/_doc.

Step 2: Generate ID for the Document

You don’t have to explicitly declare your ID. Elasticsearch can generate an ID for you. To automatically generate an ID in Elasticsearch, you just need to use a POST instead of a PUT.

POST /mammals/_doc
{
  "name":"buffalo",
  "color":"black",
  "classification":"herbivore"
}

The POST HTTP call creates an index named mammals and it adds the document (JSON object) to the index. It also auto-generates an ID for the document. You probably noticed that the example didn’t provide any parameter after _doc in the URL. Normally, this is where you declare your ID, but since you’re creating a document with a generated ID, it’s not necessary for you to provide one.

Step 3: Update the Document with the POST Method

Unlike in the previous example, you have to explicitly declare the ID when updating an existing document. In the following example, you will use an HTTP POST request with an identifier to update an existing document.

You can create a document with the ID 23, as follows:

POST /mammals/_doc/23
{
  "name":"lion",
  "color":"ochre",
  "classification":"carnivore"
}

Then you specify that ID when updating the document, like this:

POST /mammals/_doc/23
{
  "name":"crow",
  "color":"black",
  "classification":"bird"
}

This replaces the document’s fields: the name becomes “crow”, the color “black”, and the classification “bird”. Suppose that you’re trying to update a document that doesn’t exist in the index. What do you think will happen? Elasticsearch will simply create a new document.

Let’s go over the commands that we have used thus far:

  • PUT creates a document with a specified ID.
  • POST updates the document with the specified ID.
  • POST also creates a document with an auto-generated ID if one is not provided.

So, by following the above methods, you can easily ingest documents with Elasticsearch REST API.
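These three rules can be folded into a small helper. The sketch below is a hypothetical convenience wrapper (not part of any Elasticsearch client library) that picks PUT when the caller supplies an ID and POST when Elasticsearch should generate one; the cluster address is an assumption, and the requests are built but not sent.

```python
import json
import urllib.request

ES = "http://localhost:9200"  # assumed local cluster address

def index_request(index, doc, doc_id=None):
    """Build (but do not send) the HTTP request that indexes `doc`.

    With an explicit ID we use PUT (create or overwrite at that ID);
    without one we use POST so Elasticsearch auto-generates the ID.
    """
    if doc_id is not None:
        url, method = f"{ES}/{index}/_doc/{doc_id}", "PUT"
    else:
        url, method = f"{ES}/{index}/_doc", "POST"
    return urllib.request.Request(
        url,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method=method,
    )

# With an ID -> PUT; without one -> POST with an auto-generated ID.
with_id = index_request("mammals", {"name": "lion"}, doc_id="23")
without_id = index_request("mammals", {"name": "buffalo"})
```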

2) Ingesting Bulk Data with Elasticsearch REST API

Now that you have the basics nailed down, it’s time to look at how to ingest large datasets all at once using the _bulk API.

The _bulk API operation allows you to execute many actions on one or more indexes in a single call. This has obvious advantages in performing several create, update, and delete actions in a single call and can save you some valuable time. Let’s look at an example of how this is done:

POST /_bulk
<action_meta>\n
<action_data>\n
<action_meta>\n
<action_data>\n

Each action takes two lines of JSON. First, you provide the action description or metadata. Then, on the next line, you have the data. Each line is terminated by a newline character (\n). An action description for an insert might look as follows:

{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "7" } }

And the next line of data might look like this:

{ "name":"whale", "color":"grey", "classification":"marine" }

Both the meta and the data make up a single action in a bulk operation. You can send as many operations as you want in a single call, just like this:

POST /_bulk
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "7" } }
{ "name":"whale", "color":"grey", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "8" } }
{ "name":"dolphin", "color":"pearl", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "9" } }
{ "name":"seal", "color":"grey", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "10" } }
{ "name":"penguin", "color":"white/black", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "11" } }
{ "name":"walrus", "color":"brown", "classification":"marine" }
{ "delete" : { "_index" : "mammals", "_type" : "_doc", "_id" : "1" } }
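Because each action is a pair of newline-delimited JSON lines (with delete carrying no data line), the body is easy to generate programmatically. The following Python sketch is a hypothetical helper that serializes (metadata, document) pairs into a _bulk body; note the required trailing newline.

```python
import json

def build_bulk_body(actions):
    """Serialize (metadata, document) pairs into the newline-delimited
    JSON body that the _bulk endpoint expects.

    `document` may be None for actions like delete that carry no
    payload line. The body must end with a trailing newline.
    """
    lines = []
    for meta, doc in actions:
        lines.append(json.dumps(meta))
        if doc is not None:
            lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = build_bulk_body([
    ({"create": {"_index": "mammals", "_type": "_doc", "_id": "7"}},
     {"name": "whale", "color": "grey", "classification": "marine"}),
    ({"delete": {"_index": "mammals", "_type": "_doc", "_id": "1"}},
     None),
])
```

The resulting string can then be POSTed to /_bulk with a Content-Type of application/x-ndjson.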

Hopefully, you have understood how to load/ingest data into Elasticsearch. Next, you are going to look at searching with Elasticsearch REST API.

How to Search with Elasticsearch REST API?

Searching is what Elasticsearch excels at. While having a huge volume of data is great, it isn’t useful unless you put it to good use. There’s no better way to do this than searching and querying for values. 

  • Are you looking for all the marine mammals?
  • Do you need a tally of all carnivores?
  • What about the number of errors logged per hour?

The answers to these questions all begin with an index search. Let’s look at some of the ways in which you can search the data:

1) Basic Search with Elasticsearch REST API

A basic search typically looks like this:

GET /mammals/_search?q=name:w*

This GET call will return a JSON response containing the documents whose name starts with “w”: the whale and walrus documents.

2) Advanced Search with Elasticsearch REST API

Advanced search can be done by providing the query options as JSON in the request body. Try this:

GET /mammals/_search
{
  "query": {
    "term": {
      "name": "walrus"
    }
  }
}

The result will be a JSON response with the walrus document.

You can do more with this type of query, for example by searching with a sort. To try sorting, you first need to re-create the index, because the automatic field mapping selects field types that can’t be sorted by default. Run the following commands to delete and re-create the index:

DELETE /mammals
PUT /mammals
{ 
  "mappings": { 
    "_doc": { 
      "properties": { 
        "name": { 
          "type": "keyword" 
        }, 
        "color": { 
          "type": "keyword"
        },
        "classification": {
          "type": "keyword"
        }
      }
    }
  }
}

Then repopulate the index:

POST /_bulk
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "7" } }
{ "name":"whale", "color":"grey", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "8" } }
{ "name":"dolphin", "color":"pearl", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "9" } }
{ "name":"seal", "color":"grey", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "10" } }
{ "name":"penguin", "color":"white", "classification":"marine" }
{ "create" : { "_index" : "mammals", "_type" : "_doc", "_id" : "11" } }
{ "name":"walrus", "color":"brown", "classification":"marine" }
{ "delete" : { "_index" : "mammals", "_type" : "_doc", "_id" : "1" } }

Now you can search with a sort like this:

GET /mammals/_search
{
  "query" : {
    "term": { "color": "grey" }
  },
  "sort" : [
      "classification"
  ]
}

In this example, you did an ascending sort by classification. With that, you’ve gotten a glimpse of how you can search data with the Elasticsearch REST API.
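For completeness, here is the same sorted search expressed as a standalone HTTP request in Python's standard library, assuming a hypothetical cluster at localhost:9200. It is sent as a POST, since not every HTTP library allows a GET body; the request is constructed but not sent.

```python
import json
import urllib.request

ES = "http://localhost:9200"  # assumed local cluster address

# Term filter on color, with an explicit ascending sort on the
# keyword-mapped classification field.
search = {
    "query": {"term": {"color": "grey"}},
    "sort": [{"classification": {"order": "asc"}}],
}

req = urllib.request.Request(
    f"{ES}/mammals/_search",
    data=json.dumps(search).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```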

Conclusion

Whether you’re running your own Elasticsearch deployment or using a managed cluster, it’s incredibly easy to learn how to use the Elasticsearch REST API to upload data and perform searches. Your ultimate goal should be to get data into Elasticsearch, where you can perform last-mile analysis to get some actionable insights. Kibana provides you with one of the best tools in the market for creating feature-rich dashboards and visualizations. Also read about Elasticsearch replication.

Visit our Website to Explore Hevo

Businesses can use automated platforms like Hevo Data to set up the integration and handle the ETL process. It helps you directly transfer data from sources like Elasticsearch to a Data Warehouse, Business Intelligence tool, or any other desired destination in a fully automated and secure manner, without having to write any code, and provides you with a hassle-free experience.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of learning about Elasticsearch REST API in the comments section below!
