Organizations today process and transform large amounts of data with ETL (extract, transform, and load) pipelines. Loading and transforming big data is time-consuming, but smaller projects often do not need to process vast amounts of data at all.
Instead, you can use a micro ETL pipeline built on AWS Lambda to get relevant data quickly. With AWS Lambda functions, you can use time-based events to trigger the extraction, transformation, and loading of data into a central repository.
In this article, you will learn how to create a micro ETL Data Pipeline with Lambda Functions. Data Pipelines and AWS Lambda are also discussed briefly.
Prerequisites
- Basic understanding of the need for data migration
- An active AWS account with sufficient permissions
- Error handling and monitoring mechanisms
What is a Data Pipeline?
A Data Pipeline is a series of steps implemented in a specific order to process and transfer data from one system to another. The first step is to extract data from the source as input; after that, each step's output serves as the input to the next step.
A Data Pipeline has three main elements: a data source, processing steps, and a final destination. It lets users move data from source to destination, applying modifications along the way.
A Data Pipeline is an umbrella term for moving data from one place to another, and it covers both ETL and ELT processes. However, it is essential to note that a Data Pipeline does not necessarily transform the data it carries.
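To make the idea concrete, here is a toy illustration in Python (not part of the AWS sample used later) of a pipeline in which each step's output feeds the next:

# Toy Data Pipeline: extract -> transform -> load, each step's output feeding the next step.
def extract():
    # Pretend this pulls raw records from a source system
    return [{"price": "100,000"}, {"price": "250,000"}]

def transform(rows):
    # Clean the raw string values into integers
    return [int(row["price"].replace(",", "")) for row in rows]

def load(values):
    # Deliver the processed data to its destination (here, just print it)
    print("Loaded:", values)

load(transform(extract()))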
Hevo Data, a Fully-managed Data Pipeline platform, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo's wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 150+ Data Sources straight into a Data Warehouse like Amazon Redshift.
Let’s see some unbeatable features of Hevo Data:
- Live Support: With 24/5 support, Hevo provides customer-centric solutions for your business use case.
- Fully Managed: Hevo Data is a fully managed service and is straightforward to set up.
- Schema Management: Hevo Data automatically maps the source schema to perform analysis without worrying about the changing schema.
- Real-Time: Hevo Data supports both batch and real-time data transfer so that your data is always analysis-ready.
What is AWS Lambda?
Launched in 2014, AWS Lambda is a serverless computing service that allows you to run code for any application or backend service without managing servers. AWS Lambda handles administrative tasks such as CPU and memory allocation, resource provisioning, and more on its own. It can connect with more than 200 AWS services and SaaS applications.
AWS Lambda users write functions, which are self-contained applications written in one of the supported languages and runtimes, and upload them to AWS Lambda, which then executes them quickly and flexibly.
Lambda Functions can be used to do anything from serving web pages to processing data streams to calling APIs and integrating with other AWS services.
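To make that concrete, a Lambda Function is simply a handler that AWS invokes with an event payload and a context object. Here is a minimal, generic sketch (not taken from the sample repository used later in this article):

import json

# Minimal Lambda handler: AWS calls this with the triggering event and a context object.
def lambda_handler(event, context):
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "Hello from Lambda"}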
Getting Started with Data Pipeline Using Lambda
Whether you are building a data lake, an analytics pipeline, or a simple data feed, you often have small amounts of data that need to be processed and refreshed regularly. In this article, you will build and deploy a micro extract, transform, and load (ETL) pipeline to handle this requirement. You will also configure a reusable Python environment to build and deploy micro ETL pipelines from your own data sources.
Micro ETL processes work seamlessly with serverless architecture, so this article uses the AWS Serverless Application Model (SAM).
You need a local environment to inspect the data, experiment, and deploy the ETL process with the AWS SAM CLI (Command Line Interface). The deployment consists of a time-based event that triggers an AWS Lambda function, which collects, transforms, and stores the data in an Amazon S3 bucket.
Step 1: Download the Code
Download the code from GitHub with the below command.
git clone https://github.com/aws-samples/micro-etl-pipeline.git
Step 2: Setup the Environment
The GitHub code comes with a preconfigured Conda environment, so you do not need to spend time installing dependencies. A Conda environment is a directory containing a specific collection of Conda packages. You can use the environment.yml file to recreate the same dependencies.
- Create the environment using the below code.
conda env create -f environment.yml
- Activate the environment.
conda activate aws-micro-etl
Step 3: Analyze Data with Jupyter Notebook
- In this article, you will run the Jupyter notebook locally.
- After activating the environment, you can launch your Jupyter notebook.
- Launch the notebook server with the below command.
jupyter notebook
- It will open a browser window showing the Jupyter dashboard in the root project folder.
- Select the aws_mini_etl_sample.ipynb file.
- The above Jupyter notebook contains a sample micro ETL process. The ETL process leverages publicly available data from the HM Land Registry, containing the average price by property type series.
- The notebook demonstrates the following functional scenarios:
- The ability to make partial requests and therefore fetch only a small part of a larger file (see the sketch after this list).
- The ability to inspect and manipulate data to achieve the right outcome.
- Support for file types other than CSV.
- The easiest way to save a CSV file directly into an S3 bucket.
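The partial-request scenario relies on the HTTP Range header: instead of downloading the whole file, the notebook asks the server for only the last chunk of bytes. The repository defines its own helper for this; the sketch below is an illustrative assumption of how such a helper can work, with a placeholder URL:

import requests

# Illustrative helper: a negative suffix range such as "bytes=-1000000"
# asks the server for only the last N bytes of the file.
def range_header(last_bytes):
    return {"Range": f"bytes={last_bytes}"}

URL = "https://example.com/large-file.csv"  # placeholder data source
res = requests.get(URL, headers=range_header(-1000000), allow_redirects=True)
print(res.status_code)  # 206 Partial Content when the server honors the range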
Step 4: Inspect the Function
- The downloaded code includes an additional folder called micro-etl-app, containing the ETL process defined with an AWS SAM template, ready to deploy as a Lambda function.
- AWS SAM provides the syntax for expressing functions, APIs, databases, and event source mappings.
- You define and model the application using YAML, with just a few lines per resource. AWS SAM then transforms and expands this syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
- The AWS SAM app consists of the below files.
- template.yml: Contains the configuration to build and deploy the Lambda function.
- app/app.py: Contains the application code from the Jupyter notebook.
- app/requirements.txt: Lists the Python libraries needed for the Lambda function to run.
- The template.yml file contains the details for building and deploying the ETL process, such as permissions, schedule rules, variables, and more.
- For this type of micro-application, it is essential to allocate the right amount of memory and an appropriate timeout to avoid latency issues or resource restrictions. Under the Globals statement, the memory and timeout settings for the Lambda function are defined, as shown below.
Globals:
  Function:
    Timeout: 20
    MemorySize: 256
- Other necessary settings are defined inside the Properties section, such as the environment variables, which allow you to control settings like the URL to fetch without redeploying the code.
Environment:
  Variables:
    URL: 'http://publicdata.landregistry.gov.uk/market
    S3Bucket: !Ref Bucket
    LogLevel: INFO
    Filename: 'avg-price-property-uk.csv'
- A cron event is defined under the Events statement, triggering the Lambda function every day at 8 a.m.
Events:
  UpdateEvent:
    Type: Schedule
    Properties:
      Schedule: cron(0 8 * * ? *)
- The initial section of the app.py file contains required dependencies, environment variables, and other supporting statements. The main code is inside the Lambda handler.
# Imports
import ...
import ...
import ...

# Environment variables
ONE = ...
TWO = ...
THREE = ...

# Lambda function handler
def lambda_handler(event, context):
    # Code
    # ...
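As a rough sketch only (the repository's actual code is more complete and its variable names may differ), the imports and environment-variable section might look like this, assuming the names match the Variables block shown earlier in template.yml:

import io
import logging
import os

import boto3
import pandas as pd
import requests

# Configuration from the environment variables declared in template.yml
# (variable names are assumed to match the Variables block above).
URL = os.environ["URL"]
S3_BUCKET = os.environ["S3Bucket"]
FILENAME = os.environ["Filename"]

logger = logging.getLogger()
logger.setLevel(os.environ.get("LogLevel", "INFO"))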
- The app.py file includes comments that explain each statement. The first statement uses the requests library to fetch the last 2,000,000 bytes of the data source file defined in the URL environment variable.
res = requests.get(URL, headers=range_header(-2000000), allow_redirects=True)
- The second statement creates a pandas DataFrame directly from the source stream, using the skiprows parameter to drop the first row, because it is difficult to fetch precisely the beginning of a row with a byte range. The statement then assigns the predefined column headers that are missing from this chunk of the file.
df = pd.read_csv(io.StringIO(res.content.decode('utf-8')), engine='python', error_bad_lines=False, names=columns, skiprows=1)
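The remaining part of the handler writes the transformed DataFrame back to the S3 bucket created by the SAM template. The repository's exact filtering logic is not reproduced here; the following is a hedged sketch that continues from the names used in the snippets above (df, io, boto3, S3_BUCKET, FILENAME, logger):

# Optional filter step (illustrative only; the real column name and date range live in app.py):
# df = df[df["Date"] > "2021-01-01"]

# Serialize the DataFrame to CSV in memory and upload it to the bucket.
csv_buffer = io.StringIO()
df.to_csv(csv_buffer, index=False)

s3 = boto3.client("s3")
s3.put_object(Bucket=S3_BUCKET, Key=FILENAME, Body=csv_buffer.getvalue())
logger.info("File saved to s3://%s/%s", S3_BUCKET, FILENAME)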
- The last file in the application is requirements.txt, which the AWS SAM CLI uses to build and package the dependencies needed for the Lambda function to work correctly. You can use additional libraries in your application, but you must list them in requirements.txt.
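Based on the libraries used in the snippets above, a minimal requirements.txt would likely list at least the following (boto3 is already available in the Lambda Python runtime, and the repository may pin specific versions):

requests
pandas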
Step 5: Build and Deploy ETL
To build and deploy the ETL process, follow the below steps.
- Step 1: Go to the micro-etl-app from the command line.
- Step 2: Run sam build to let the AWS SAM CLI process the template file and bundle the application code with any functional dependencies.
- Step 3: Run the sam deploy --stack-name my-micro-etl --guided command to deploy the process and save the parameters for future deployments.
- Step 4: Invoke the Lambda function and inspect the log simultaneously from the command line by using the below command.
aws lambda invoke --function-name FUNCTION_ARN out --log-type Tail --query 'LogResult' --output text | base64 -d
- Step 5: The base64 -d flag shown above works on Linux. On macOS, use the base64 -D command instead.
- Step 6: You can also invoke the Lambda function from the Lambda console and inspect its CloudWatch log group, named /aws/lambda/<function name>.
- Step 7: The URL for the generated file in the S3 bucket is shown on the log’s final line. It should look like the below output.
## FILE PATH
s3://micro-etl-bucket-XXXXXXXX/avg-price-property-uk.csv
- Step 8: You can use the AWS CLI to download the file and verify that it contains only rows from the range defined in app.py (a quick local check is sketched after the command below):
aws s3 cp s3://micro-etl-bucket-xxxxxxxx/avg-price-property-uk.csv local_file.csv
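After downloading, you can do a quick sanity check locally with pandas, for example (a small sketch; the exact columns depend on the source file):

import pandas as pd

# Load the file copied from S3 and confirm how many rows it contains.
df = pd.read_csv("local_file.csv")
print(df.head())
print(len(df), "rows")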
Step 6: Clean Up Resources
- You can delete the resources from the command line to avoid future charges.
aws cloudformation delete-stack --stack-name my-micro-etl
The above command removes all the resources used in this article, including the S3 bucket.
- Deactivate the Conda environment using the below command.
conda deactivate
Conclusion
In this article, you learned how to create a micro ETL pipeline for refreshing data. Organizations use ETL pipelines to fetch data from a particular source, transform it into a specific format, and then store it in a warehouse. With AWS, micro ETL processes are easy to build and deploy, giving you a cost-effective mechanism for regularly transferring and managing small amounts of data.
Hevo Data, a No-code Data Pipeline provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of Desired Destinations with a few clicks. Hevo Data with its strong integration with 150+ sources (including 60+ free sources) allows you to export data from your desired data sources. Try a 14-day free trial and experience the feature-rich Hevo suite firsthand. Also, check out our unbeatable pricing to choose the best plan for your organization.
Frequently Asked Questions
1. What is Lambda in AWS for dummies?
AWS Lambda is a serverless compute service that lets you run code in response to events instead of managing servers; you are charged only for the compute time your code consumes, with a free tier for light usage.
2. Is Lambda a DevOps tool?
Yes, AWS Lambda can be regarded as a DevOps tool because it automates infrastructure tasks by executing code in response to events. It is often used for CI/CD automation, monitoring, and infrastructure management without managing servers.
3. What is the disadvantage of Lambda?
The main limitation is AWS Lambda's maximum execution duration of 15 minutes, which is constraining for long-running tasks. It can also suffer from cold start latency, particularly for infrequently invoked functions.
Skand is a dedicated Customer Experience Engineer at Hevo Data, specializing in MySQL, Postgres, and REST APIs. With three years of experience, he efficiently troubleshoots customer issues, contributes to the knowledge base and SOPs, and assists customers in achieving their use cases through Hevo's platform.