How to Implement Real-Time Streaming Data to S3? | 2 Easy Methods

By: Pratik Dwivedi | Published: January 26, 2021

Do you want to stream your data to Amazon S3? Are you finding it challenging to load your data into your Amazon S3 buckets? If yes, then you’ve landed at the right place! This article answers those questions and walks you through two efficient ways of Streaming Data to S3. Follow the easy step-by-step guide to master Streaming Data to S3 and bring in data from a source of your choice in real time.

Introduction to Amazon S3

Amazon S3 is a highly scalable, reliable, fast, and inexpensive data storage infrastructure on the cloud, also called “storage for the Internet”. It can be used to store and retrieve any amount of data at any time, from anywhere on the web.

Some important properties of Amazon S3 are listed below (a short SDK sketch of these concepts follows the list):

  • The basic storage structure in S3 is a bucket. 
  • A single AWS account can have many buckets. 
  • The atomic unit of data storage is called an object. 
  • An object is a file plus any optional metadata that describes the file.
  • An S3 bucket can store many objects and resides in the AWS Region of your choice. 
  • The creator of a bucket can grant others permission to create, delete, and list objects in it. 
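
For illustration, here is a minimal Python (boto3) sketch of these concepts: it creates a bucket in a chosen Region and stores a single object together with some optional metadata. The bucket name, Region, and object key are placeholders you would replace with your own values.

import boto3  # AWS SDK for Python

s3 = boto3.client("s3", region_name="eu-west-1")  # example Region

# Bucket names must be globally unique; this one is a placeholder.
s3.create_bucket(
    Bucket="my-streaming-demo-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Store an object (the atomic storage unit) along with optional metadata.
s3.put_object(
    Bucket="my-streaming-demo-bucket",
    Key="logs/2021/01/26/events.json",
    Body=b'{"event": "page_view"}',
    Metadata={"source": "web-app"},
)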

Understanding Streaming Data

A Streaming Data source is one that continuously generates data, often at varying speeds (a small simulation follows the examples below). 

Some examples of Streaming Data are as follows:

  • A log generator. 
  • Customer interaction data from a web application or a mobile application. 
  • Stock market data. 
  • Data from scientific sensors (IoT device data, etc.)
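
To make this concrete, here is a small Python sketch of a streaming source: it continuously emits stock-ticker events (in the same format used later in this post) at varying speeds. The ticker symbols and value ranges are made up for illustration.

import json
import random
import time

def stock_tick_stream():
    """Simulate a streaming source that emits events at varying speeds."""
    while True:
        yield json.dumps({
            "TICKER_SYMBOL": random.choice(["JIB", "QXZ", "TBV"]),
            "SECTOR": "AUTOMOBILE",
            "CHANGE": round(random.uniform(-1, 1), 2),
            "PRICE": round(random.uniform(10, 100), 2),
        })
        time.sleep(random.uniform(0.1, 1.0))  # irregular arrival rate

for event in stock_tick_stream():
    print(event)  # in practice, hand each event to a producer/agent
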
Replicate Data in Minutes Using Hevo’s No-Code Data Pipeline

Hevo Data, a Fully-managed Data Pipeline platform, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo’s wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 100+ Data Sources (including Amazon S3) straight into your Data Warehouse or any database.

To further streamline and prepare your data for analysis, you can enrich raw granular data using Hevo’s robust, built-in Transformation Layer without writing a single line of code!

GET STARTED WITH HEVO FOR FREE

Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full-access free trial today to experience entirely automated, hassle-free Data Replication!

Understanding the Need to Preplan the Data Ingestion

The ingestion of data into S3 requires significant deliberation as the incoming data can be in many formats, arrive at different speeds, and have diverse pre-processing requirements. 

If your data is not on the AWS infrastructure, you may want to use non-AWS solutions/tools to move your data into S3.

Method 1: Streaming Data to S3 using Apache Flume

Understanding Apache Flume

Flume is an open-source, distributed, and reliable offering from Apache; using it does not bind you to any particular infrastructure or technology stack. Using Flume, you can seamlessly transport massive quantities of your data from many different sources to a centralized data store. The data sources are customizable and can be network traffic data, event logs, social-media-generated data, or almost any kind of data.

We will use two abstractions frequently in this discussion:

  • Flume Event: An atomic unit of data flow having a byte payload and some metadata (an optional set of string attributes).
  • Flume Agent: A Java (JVM) process that hosts the components through which events flow from an external source to the next destination (hop). 

An event may hop through a few intermediate destinations before coming to rest at the final destination (sink).

Procedure to Implement Streaming Data to S3

The “Agent” is the engine that drives our data flow from source to sink (the destination, S3 in our case). Using Flume, we first list the sources, sinks, and channels for a given agent, and then point each source and sink at a channel. 

A source can write to multiple channels, but each sink reads from exactly one channel. 

The template format (stored in a weblog.config file) for specifying these is:

#streaming data to S3 
# list the sources, sinks and channels for the agent
<Agent>.sources = <Source>
<Agent>.sinks = <Sink>
<Agent>.channels = <Channel1> <Channel2>
	
# set channel for source
<Agent>.sources.<Source>.channels = <Channel1> <Channel2> ...
	
# set channel for sink
<Agent>.sinks.<Sink>.channel = <Channel1>

You can set the properties of each source, sink, and channel in the following format:

#streaming data to S3 
# properties for sources
<Agent>.sources.<Source>.<someProperty> = <someValue>

# properties for channels
<Agent>.channels.<Channel>.<someProperty> = <someValue>

# properties for sinks
<Agent>.sinks.<Sink>.<someProperty> = <someValue>

The Hadoop filesystem layer supports Amazon S3 (through the s3n/s3a connectors), so we can use Flume’s HDFS sink to write to S3. The HDFS sink writes events (data) through the Hadoop filesystem API. 

It can bucket/partition the data based on attributes such as timestamp, machine of origin, or event type. Compression is supported, as are text and sequence files.

More information about the HDFS sink can be found in the Apache Flume documentation.

Some configurable parameters include:

  • hdfs.path: HDFS directory path (e.g., hdfs://YourNameNode/flume/stockdata/ or s3n://bucketName). 
  • hdfs.filePrefix: Name prefixed to files created by Flume in the HDFS directory.
  • hdfs.fileSuffix: Suffix to append to the file name (e.g., .avro or .json).
  • hdfs.rollSize: File size that triggers a roll, in bytes (0 = never roll based on file size).
  • hdfs.rollCount: Number of events written to a file before it is rolled (0 = never roll based on the number of events).
  • hdfs.retryInterval: Time in seconds between consecutive attempts to close a file.

A typical S3 sink configuration looks like this (with newer Hadoop versions you would typically use an s3a:// path and configure credentials in core-site.xml rather than embedding keys in the URI):

#streaming data to S3 
agent.sinks.s3hdfs.type = hdfs
agent.sinks.s3hdfs.hdfs.path = s3n://<AWS.ACCESS.KEY>:<AWS.SECRET.KEY>@<bucket.name>/prefix/
agent.sinks.s3hdfs.hdfs.fileType = DataStream
agent.sinks.s3hdfs.hdfs.filePrefix = events-
agent.sinks.s3hdfs.hdfs.writeFormat = Text
agent.sinks.s3hdfs.hdfs.rollCount = 0
agent.sinks.s3hdfs.hdfs.rollSize = 33554432
agent.sinks.s3hdfs.hdfs.batchSize = 1000
agent.sinks.s3hdfs.hdfs.rollInterval = 0
# the sink must also be bound to the channel it reads from (channel name is an example)
agent.sinks.s3hdfs.channel = memoryChannel

Though we have discussed a single flow from a source to an S3 sink, a single Flume agent can contain several independent flows. You can list multiple sources, sinks, and channels in a single config.  

Flume allows you to define multiple hops, where the flow starts from the first source and the receiving sink can forward events to another agent, and so on. Flume also allows you to fan out flows: data from one source can be sent to multiple channels. For security, SSL/TLS support is built into many Flume components.

More information on the working of Apache Flume can be found in the Apache Flume User Guide.

Limitations of using Apache Flume while Streaming Data to S3

  • You need to configure the agent to correctly identify/load all the required data, define the hops (connections) correctly to set up the data flow, and configure the sink to receive them. 
  • Any pre-processing required must be explicitly programmed or configured. 
  • You are required to know the internals of S3, HDFS, the JVM, etc. 

What Makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

Sign up here for a 14-day free trial!

Method 2: Streaming Data to S3 using Amazon Kinesis

Understanding Amazon Kinesis

Amazon Kinesis enables real-time processing of streaming data at scale. It is built for real-time applications and lets developers pull data from multiple sources while their consumer applications scale up and down on EC2 instances.

It captures, stores, and processes data from large, distributed sources such as event logs and social media feeds. Once the data has been processed, Amazon Kinesis can deliver it to multiple consumers simultaneously.

Amazon Kinesis Firehose delivery streams can be created via the console or the AWS SDK. We will use the console to build the delivery stream for this post; once created, the delivery stream can be edited and modified at any time. If you prefer the SDK route, a minimal sketch follows.
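
Here is a minimal boto3 sketch that creates a Direct PUT delivery stream writing to S3. The stream name, IAM role ARN, bucket ARN, and buffering values are placeholders you would replace with your own.

import boto3  # AWS SDK for Python

firehose = boto3.client("firehose", region_name="us-east-1")  # example region

firehose.create_delivery_stream(
    DeliveryStreamName="stock-ticker-stream",  # placeholder name
    DeliveryStreamType="DirectPut",            # producers write directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::my-streaming-bucket",
        "Prefix": "stockdata/",
        # Deliver to S3 once 5 MB have accumulated or 60 seconds have passed.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
    },
)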

Procedure to Implement Streaming Data to S3

Step 1: Access Kinesis Service

First, navigate to the Kinesis service, which is located in the Analytics category of the AWS Management Console. If you have never used Kinesis before, you will be greeted with a welcome page.

To begin creating our delivery stream, click on Get Started.

Step 2: Configure the Delivery Stream

Enter a name for the delivery stream. Under the Source section, select Direct PUT or other sources. This option creates a delivery stream that producer applications can write to directly.

If you choose Kinesis stream instead, the delivery stream uses a Kinesis data stream as its data source. To keep streaming data to S3 simple, choose the first option.

Step 3: Prerequisites for the Lambda Function

Kinesis Firehose can use Lambda functions to transform incoming source data before delivering it to the destination. AWS provides blueprints for these Lambda functions. Before we create a Lambda function, let’s go over the fields that every transformed record returned to Firehose must contain (a minimal handler is sketched after this list):

  • recordId: The ID of the record, passed from Kinesis Firehose to Lambda during the invocation; it must be returned unchanged.
  • result: The status of the transformation of the record: Ok, Dropped, or ProcessingFailed.
  • data: The transformed data payload, base64-encoded.
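
Here is a minimal Python handler that follows this contract, assuming the incoming records contain the stock-ticker JSON shown later in this post. It is a sketch rather than the exact blueprint code; the transformation itself (rounding the CHANGE attribute) is only an example.

import base64
import json

def lambda_handler(event, context):
    """Kinesis Firehose data-transformation handler (sketch)."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Example transformation: round the CHANGE attribute to two decimals.
        payload["CHANGE"] = round(payload.get("CHANGE", 0.0), 2)
        transformed = json.dumps(payload) + "\n"  # newline-delimit records in S3
        output.append({
            "recordId": record["recordId"],  # must be returned unchanged
            "result": "Ok",                  # Ok | Dropped | ProcessingFailed
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}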

Numerous Lambda blueprints are available for data transformation; we will use one of them to develop our Lambda function.

You will be prompted to select a Lambda function. Choose the Create new option; the available Lambda blueprints for data transformation are listed here. Use General Firehose Processing as the blueprint.

Give the function a name. Then attach an IAM role that has access to the Firehose delivery stream and permission to invoke the PutRecordBatch action.

Step 4: Configure the Destination

After the IAM role has been created, you will be redirected to the Lambda function creation page. Select the newly created role, then write your own Lambda function code to transform the data records. The Lambda blueprint pre-fills the code with the record format described above, which you must follow.

Our streaming data will be in the following format.

#streaming data to S3 
{"TICKER_SYMBOL":"JIB","SECTOR":"AUTOMOBILE","CHANGE":-0.15,"PRICE":44.89}

On the following page, you will be asked to choose the destination S3 bucket where your records will be saved.

After reviewing the configuration, create the Amazon Kinesis Firehose delivery stream by clicking Create Delivery Stream. The new delivery stream will initially appear in the Creating state.

Once the delivery stream’s state changes to Active, you can begin transmitting data to S3 from a producer, for example via the AWS SDK as sketched below.
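
For example, a producer can push records to the delivery stream with boto3. The stream name and region are placeholders; the record format matches the sample shown earlier.

import json
import random

import boto3  # AWS SDK for Python

firehose = boto3.client("firehose", region_name="us-east-1")  # example region

record = {
    "TICKER_SYMBOL": "JIB",
    "SECTOR": "AUTOMOBILE",
    "CHANGE": round(random.uniform(-1, 1), 2),
    "PRICE": 44.89,
}

# Firehose expects raw bytes; a trailing newline keeps records separated
# inside the S3 objects that Firehose delivers.
firehose.put_record(
    DeliveryStreamName="stock-ticker-stream",  # placeholder stream name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)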

You can go to the destination S3 bucket and verify that the streaming data was saved there, and check that the transformed records still carry the CHANGE attribute; a quick programmatic check is sketched below. The backup S3 bucket will contain all of the streaming records as they were before transformation.
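
As a quick check (the bucket name and prefix are placeholders), you can list the objects Firehose has delivered and read one back to confirm the transformed attributes are present:

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# List the objects delivered under the configured prefix.
response = s3.list_objects_v2(Bucket="my-streaming-bucket", Prefix="stockdata/")
contents = response.get("Contents", [])
for obj in contents:
    print(obj["Key"], obj["Size"])

# Read the first delivered object (if any) and inspect its records.
if contents:
    body = s3.get_object(Bucket="my-streaming-bucket", Key=contents[0]["Key"])["Body"]
    print(body.read().decode("utf-8"))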

And that’s it! You have now successfully created and tested a delivery stream for streaming data to S3 using Amazon Kinesis Firehose.

Conclusion

This article helped you learn the procedure to set up Streaming Data to S3, explaining the concepts behind every step so you can understand and implement them efficiently. After this process, the need might arise to push the data stored in Amazon S3 to the data warehouse of your choice, to run custom queries for analytics reports and live dashboards. That’s where Hevo Data can help.

Hevo Data, a No-code Data Pipeline, helps you transfer data from a source of your choice in a fully automated and secure manner without having to write any code. Hevo, with its strong integration with 100+ sources & BI tools, allows you to not only export & load data but also transform & enrich your data & make it analysis-ready in a jiffy.

Give Hevo Data a try and sign up for a 14-day free trial today. Hevo offers plans & pricing for different use cases and business needs; check them out!

Pratik Dwivedi
Freelance Technical Content Writer, Hevo Data

Pratik is a freelance technical content writer for the data industry who loves creating engaging content on topics like data analytics, machine learning, AI, big data, and business intelligence.
