Organizations across all verticals rely on data to make informed business decisions. Drawing meaningful insights from large volumes of data allows businesses to gain a competitive edge and scale their growth, which is why they often need to evaluate and optimize their data storage and analytics solutions. One such scenario arises when organizations consider migrating their data from Amazon S3 to Firebolt for faster analysis and querying.

Amazon S3 serves as an effective choice for organizations with cloud storage needs, owing to its scalability, cost-effectiveness, and durability. Firebolt, on the other hand, is a cloud data warehouse designed for very fast query response times. Therefore, an Amazon S3 to Firebolt integration can help a business address the challenges of a modern data analytics stack and grow by leaps and bounds.

In this blog post, we’re going to walk you through three methods of Amazon S3 to Firebolt migration, so that you can choose the one that fits your business needs best.

Method 1: Using the INSERT INTO Command

Before you use the INSERT INTO command to load data from S3 to Firebolt, there are a few steps that you need to follow:

Step 1: Create an IAM role in AWS so that Firebolt can access S3

Amazon S3 to Firebolt: Creating IAM Role in AWS
  • Head to the IAM console in the AWS Management Console and click on Roles in the left-hand navigation panel.
  • Next, click on Create Role and select AWS Service as the type of trusted entity.
  • In the next step, select Firebolt as the service that will assume the role.
  • Click on Next: Permissions and attach the S3 access policies the role needs:
  • AmazonS3ReadOnlyAccess (read access is enough for loading data)
  • AmazonS3FullAccess (only if Firebolt also needs to write to the bucket)
  • Click on Next: Tags
  • Click on Next: Review
  • Finally, review the configuration and hit Create Role. If you prefer to script this setup instead of clicking through the console, see the sketch after this list.
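
For those who would rather automate the role creation, here is a minimal Python sketch using boto3. The Firebolt account ID and external ID in the trust policy are placeholders (take the real values from your Firebolt setup screen), and the role name and policy choices simply mirror the steps above:

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets Firebolt assume the role.
# <firebolt_aws_account_id> and <external_id> are placeholders --
# copy the actual values from the Firebolt console when configuring access.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::<firebolt_aws_account_id>:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "<external_id>"}},
        }
    ],
}

role = iam.create_role(
    RoleName="firebolt-s3-access",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows Firebolt to read data from S3",
)

# Read-only access is enough for loading data into Firebolt;
# attach AmazonS3FullAccess instead only if write access is required.
iam.attach_role_policy(
    RoleName="firebolt-s3-access",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Use this ARN in the CREATE EXTERNAL TABLE statement in Step 2.
print(role["Role"]["Arn"])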

Step 2: Create an external table in Firebolt pointing to the S3 bucket

  • For this, first you’ll need to connect to your Firebolt database (for example, through the SQL workspace in the Firebolt UI).
  • Then run a command like the following to create the external table.
CREATE EXTERNAL TABLE my_external_table (
  column1 TEXT,
  column2 INT
)
URL = 's3://<bucket_name>/<prefix>/'
CREDENTIALS = (AWS_ROLE_ARN = 'arn:aws:iam::<account_id>:role/<role_name>')
-- Adjust the pattern and type to match your files (e.g. CSV, PARQUET, JSON).
OBJECT_PATTERN = '*.parquet'
TYPE = (PARQUET);
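
Note that the INSERT in Step 3 loads data from this external table into a regular (internal) Firebolt table, so that table has to exist first. You can create it in the SQL workspace, or programmatically. Below is a minimal sketch using the firebolt-sdk Python connector; the credentials, account, engine, and database names are placeholders, and the exact authentication arguments may differ between SDK versions:

from firebolt.db import connect
from firebolt.client.auth import ClientCredentials

# Placeholder credentials and resource names -- replace with your own.
connection = connect(
    auth=ClientCredentials("<client_id>", "<client_secret>"),
    account_name="<account_name>",
    database="<database_name>",
    engine_name="<engine_name>",
)
cursor = connection.cursor()

# Internal table with the same schema as the external table above.
# Older Firebolt versions may require
# CREATE FACT TABLE ... PRIMARY INDEX column1 instead.
cursor.execute(
    """
    CREATE TABLE IF NOT EXISTS my_table (
        column1 TEXT,
        column2 INT
    )
    """
)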

Step 3: Finally, use the INSERT INTO command to load data from the external table into Firebolt

Amazon S3 to Firebolt: Insert Into Command for a Sample Table
  • Execute a command like the following to copy the data from the external table into the internal Firebolt table (my_table here is the internal table created above, with a matching schema).
INSERT INTO my_table
SELECT *
FROM my_external_table;

And that’s it. Your data will be migrated from S3 to Firebolt.
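
To double-check the load, a quick row count against both tables is usually enough. The sketch below reuses the hypothetical connection and cursor from the Step 2 snippet:

# Reuses the `cursor` from the firebolt-sdk connection sketch in Step 2.
cursor.execute("SELECT COUNT(*) FROM my_external_table")
print("Rows visible in S3:", cursor.fetchone()[0])

cursor.execute("SELECT COUNT(*) FROM my_table")
print("Rows loaded into Firebolt:", cursor.fetchone()[0])

connection.close()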

Method 2: Using the Firebolt API

Step 1: Before using the Firebolt API, you’ll first need to create an IAM role in AWS. The steps are the same as in the previous method.

Step 2: Create a Firebolt API key

  • Log in to your Firebolt account and navigate to the Settings page.
  • Next, click on the Create API Key button under the API Keys section.

Step 3: Build the Firebolt API request to upload the data from S3

  • The request to upload the data from S3 is a POST request to the api/v1/import endpoint.
  • The request body should contain the following JSON:
{
  "source": "s3://<bucket_name>/<prefix>",
  "destination": "my_table",
  "api_key": "<api_key>"
}

Step 4: Finally, execute the Firebolt API request

  • Execute the request using a REST client such as curl or Postman.
  • For example, run this command to execute the request using curl.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "source": "s3://<bucket_name>/<prefix>",
    "destination": "my_table",
    "api_key": "<api_key>"
  }' \
  https://api.firebolt.com/v1/import
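
If you’d rather script the call than use curl, here is an equivalent sketch in Python using the requests library. The endpoint and payload simply mirror the example above, so treat them as placeholders to adapt to your own account:

import requests

payload = {
    "source": "s3://<bucket_name>/<prefix>",
    "destination": "my_table",
    "api_key": "<api_key>",
}

# Same endpoint and JSON body as the curl example above.
response = requests.post(
    "https://api.firebolt.com/v1/import",
    json=payload,  # sends the payload as application/json
    timeout=60,
)
response.raise_for_status()
print(response.json())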

The data from the S3 bucket will now be successfully migrated to Firebolt.

These manual methods are time-consuming and require a working knowledge of S3 and coding. But if you want a simpler way to migrate data that doesn’t require you to code, we have a better method for you.

Method 3: Using an Automated Data Pipeline

Using an automated and fully-managed data pipeline like Hevo Data can help you migrate data from S3 to Firebolt seamlessly, without the hassle of coding. 

The benefits of using this method are:

  • Efficient and Time-Saving: With an automated data pipeline like Hevo, you can migrate data from Amazon S3 to Firebolt without wasting time or writing a single line of code, unlike the manual methods.
  • Repeatable and Reliable: Once the pipeline is set up, you can reuse it for future data replication and migration runs as many times as you want. The repeatable framework of automated data pipelines allows organizations to make incremental updates without starting from scratch.
  • Automated Schema Management: Fully managed, automated data pipelines use auto schema mapping to map all your incoming schemas automatically and without errors.
  • Scalable Infrastructure: Automated data pipelines use cloud computing resources to handle large volumes of data and process them within reasonable timeframes.
  • Seamless Data Transformation: Third-party data pipelines like Hevo offer a drag-and-drop console for easy transformations, as well as a Python console for those who want to perform complex transformations.

Hevo Data provides a no-code, fully-managed and automated pipeline that helps you leverage all these benefits for an S3 to Firebolt data migration.

Step 1: Configure Amazon S3 as Source

Amazon S3 to Firebolt: Configure Source

Step 2: Configure Firebolt as Destination

Amazon S3 to Firebolt: Configure Destination

That’s it. Your pipeline will be set up in just a few minutes and will start migrating data from S3 to Firebolt without any hassle.

What You Can Achieve by Migrating Data from Amazon S3 to Firebolt

  • Migrating data from S3 to Firebolt allows organizations to derive faster insights as Firebolt can analyze vast datasets with sub-second response time.
  • Migrating data to Firebolt helps businesses to seamlessly scale storage and compute resources owing to Firebolt’s elastic scalability.
  • Amazon S3 is a cost-effective storage solution, while Firebolt offers unique indexing and smart data compression capabilities. The integration of both can help organizations optimize their costs while also making the most of their data.

Conclusion

In a nutshell, the Amazon S3 Firebolt integration can unlock the full potential of data analytics in an organization. By leveraging Firebolt’s high-performance analytics capabilities, scalability, and simplified data management, businesses can accelerate their analytical workflows, uncover valuable insights, and gain a competitive edge in today’s data-centric landscape.

From the discussion above, we learned that there are multiple methods of Amazon S3 Firebolt migration. However, the method that suits you best depends on your use case, engineering bandwidth, and budget. With Hevo, you can automate the entire process and enjoy its 150+ plug-and-play integrations (including 50+ free sources), like S3 and Firebolt. Check out our transparent and unbeatable pricing to make an informed decision.

Hevo Data’s pre-load transformations save countless hours of manual data cleaning and standardization when connecting Amazon S3 to Firebolt, getting it done in minutes via a simple drag-and-drop interface or your own custom Python scripts. There’s no need to go to Firebolt for post-load transformations: you can run complex SQL transformations from the comfort of Hevo Data’s interface and get your data into its final, analysis-ready form.

Former Content Marketing Specialist, Hevo Data

Anwesha is experienced in curating content and executing content marketing strategies through a data-driven approach. She has more than 5 years of experience writing about ML, AI, and data science.
