Terraform Lambda Function Deployment: 5 Easy Steps

Published: April 29, 2022


AWS Lambda users write functions, which are self-contained applications written in one of the supported languages and runtimes, and upload them to the service, which executes them rapidly and flexibly.

Lambda functions can be used to serve web pages, process data streams, call APIs, and communicate with other Amazon Web Services offerings, among other things.

In this article and code examples, you will see how to implement Terraform Lambda functions in simple steps and get a deep understanding of Terraform Lambda Resources.

What is AWS Lambda?


AWS Lambda is an event-driven serverless compute service that lets you run code for almost any application or backend service without provisioning or managing servers. More than 200 AWS services and SaaS applications can invoke Lambda, and you pay only for what you use.

“Serverless” means that these functions do not run on servers you have to manage. AWS Lambda is a fully managed service that takes care of all the underlying infrastructure for you.

Lambda functions can be used for a variety of tasks, including serving web pages, processing data streams, calling APIs, and interacting with other Amazon Web Services services.

It also works with other AWS services like API Gateway, DynamoDB, and RDS, and serves as the foundation for AWS Serverless solutions. Lambda works with a variety of popular languages and runtimes, making it an excellent choice for Serverless developers.

Key Features of Lambda

  • Code Signing: Code signing for Lambda provides trust and integrity controls, allowing you to ensure that your Lambda services only use unmodified code provided by authorized developers.
  • Lambda Extensions: You can use Lambda extensions to improve your Lambda functions. For example, you can utilize extensions to make it easier to integrate Lambda with your preferred monitoring, observability, security, and governance tools.
  • Function Blueprints: A function blueprint is a piece of code that explains how to use Lambda with other AWS services or third-party apps. The blueprints include sample code and function configuration options for the Node.js and Python runtimes.
  • Controls for Concurrency and Scaling: Concurrency and scaling controls, such as concurrency limits and provisioned concurrency, let you fine-tune your production applications’ scalability and reactivity.
  • Functions Defined as Container Images: You may utilize your chosen container image tooling, processes, and dependencies to design, test, and deploy your Lambda functions.


What is Terraform?

Terraform Lambda: Terraform logo
Image Source

Terraform by HashiCorp is an infrastructure as code solution that allows you to specify cloud and on-premise resources in human-readable configuration files that you can reuse and share. Then, throughout the lifecycle of your infrastructure, you can utilize a consistent methodology to provide and manage it. Terraform can manage both low-level and high-level components, such as Compute, Storage, and Networking Resources, as well as DNS records and SaaS capabilities.

Key Features of Terraform

  • Remote Execution: By default, Terraform Cloud runs Terraform on disposable virtual machines in its own cloud infrastructure. Terraform Cloud Agents can be used to run Terraform on isolated, private, or on-premises infrastructure. This execution away from your workstation is referred to as “remote operations.”
  • Workspaces: Instead of directories, Terraform Cloud organizes infrastructure using workspaces. Each workspace contains everything Terraform needs to manage a specific set of infrastructure, and Terraform makes use of that content whenever it runs in that workspace.
  • Data Sharing: Your Terraform state is stored on Terraform Cloud, which operates as a remote backend. Workspaces are linked to state storage, which helps preserve the state connected to the configuration that created it.
  • Version Control Integration: Infrastructure-as-code, like other types of code, should be kept in version control, hence Terraform Cloud is built to connect with your VCS provider directly.
  • Command-Line Integration: Most Terraform users run terraform plan to interactively review their work while editing configurations. Remote execution offers major benefits to teams, while local execution benefits individual developers; Terraform Cloud’s CLI integration gives you both.
  • Private Registry: Terraform may get providers and modules from a variety of different places. With Terraform Cloud, finding providers and modules to utilize with a private registry is a lot easier. Users can search a directory of internal providers and modules and establish flexible version limits for the modules they use in their configurations. Easy versioning allows downstream teams to trust secret modules while allowing upstream teams to iterate more quickly.

Terraform Lambda Function Deployment

Terraform Lambda Function Deployment can be done in 5 easy steps:

Step 1: Creating an IAM User 

  • Navigate to Identity and Access Management (IAM) in the AWS Console.
  • Create an IAM user with Administrator Access on both the AWS console and the API by clicking Create IAM User.
  • Create a folder called Terraform-folder/.aws on your local machine.
  • Create a text file called credentials inside it, with the following contents:
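The credentials file uses the standard AWS shared-credentials format. The key values below are placeholders — replace them with the access key ID and secret access key of the IAM user you just created:

```ini
[default]
aws_access_key_id     = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```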

Step 2: Creating an IAM Policy

Make a folder called Terraform-folder/lambda-test/iam. Using the AWS Policy Generator, build a policy file automatically:

  • Select IAM policy from the Type of Policy drop-down menu.
  • Select AWS CloudWatch Logs from the AWS Service menu.
  • Select All Actions from the drop-down menu.
  • As the Amazon resource name, type *.
  • Click Create Policy.
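The generator’s output, saved as lambda_policy.json in the iam folder, will look roughly like the following (a sketch, assuming All Actions on CloudWatch Logs with * as the resource; the Sid value is arbitrary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Action": "logs:*",
      "Resource": "*"
    }
  ]
}
```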

Save the generated JSON in a file called lambda_policy.json. Then add another file named lambda_assume_role_policy.json to the iam folder, containing the following code:

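The assume-role policy is the standard trust policy that allows the Lambda service to assume the execution role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```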


Step 3: Creating Lambda Function and uploading it to S3

For your project, make a directory called Terraform-folder/lambda-test. Create a simple Hello World Lambda function named hello.js with the following code.


Create a new Amazon S3 bucket, compress the Lambda function into hello.zip, and upload hello.zip to the bucket.

Step 4: Creating Terraform Resources

Create three .tf files in the lambda-test project folder so that Terraform can deploy the Lambda function:

  • iam-lambda.tf: defines the IAM resources and assigns the policies from Step 2 to them.
  • provider.tf: registers Amazon Web Services as the Terraform provider.
  • lambda.tf: defines the Lambda function resource.

For each of these files, the sample code is provided below.
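A sketch of the three files is shown below. The region, role name, and bucket name are assumptions — substitute your own values; in particular, s3_bucket must be the bucket you created in Step 3:

```hcl
# provider.tf — registers AWS as the Terraform provider
provider "aws" {
  region = "us-east-1"
}

# iam-lambda.tf — the execution role and its policies, using the files from Step 2
resource "aws_iam_role" "lambda_role" {
  name               = "lambda-test-role"
  assume_role_policy = file("iam/lambda_assume_role_policy.json")
}

resource "aws_iam_role_policy" "lambda_logs" {
  name   = "lambda-test-logs"
  role   = aws_iam_role.lambda_role.id
  policy = file("iam/lambda_policy.json")
}

# lambda.tf — the function itself, pointing at the hello.zip uploaded in Step 3
resource "aws_lambda_function" "hello" {
  function_name = "hello"
  s3_bucket     = "my-lambda-test-bucket"
  s3_key        = "hello.zip"
  handler       = "hello.handler"
  runtime       = "nodejs12.x"
  role          = aws_iam_role.lambda_role.arn
}
```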

Step 5: Deploying Lambda Function

To initialize Terraform and download the provider plugins, run the following command in the lambda-test directory:

terraform init
Then, to deploy all of the resources in the project folder, run this command:

terraform apply -auto-approve

Terraform will deploy your Lambda function to AWS automatically.

Go to the AWS Lambda console, find the hello function, open it, and click Test to verify that the function works. Provide a test event and check that the log output contains “Hello World”.

You have successfully carried out Terraform Lambda Function Deployment.

Terraform Lambda Resources

1) aws_lambda_alias

The aws_lambda_alias resource creates a virtual identifier for a specific version of a Lambda function. It lets you upgrade the version of a function without requiring clients to change their code. You can also use routing_config to implement canary deployments by pointing the alias at several versions.

Here’s an example of Terraform AWS using the Lambda alias feature:

resource "aws_lambda_alias" "prod_lambda_alias" {
  name             = "your_alias"
  description      = "production version"
  function_name    = aws_lambda_function.lambda_function_prod.arn
  function_version = "1"
  routing_config {
    additional_version_weights = {
      "2" = 0.4
    }
  }
}

2) aws_lambda_event_source_mapping

The aws_lambda_event_source_mapping resource establishes a mapping between a Lambda function and an event source. The ARN of the event source that triggers the Lambda function is configured. It also specifies the properties that will be used to regulate the function’s behavior.

A Terraform AWS example of a DynamoDB event source is shown below:

resource "aws_lambda_event_source_mapping" "DynamoDBExample" {
  event_source_arn  = aws_dynamodb_table.dynamodbexample.stream_arn
  function_name     = aws_lambda_function.dynamodbexample.arn
  starting_position = "LATEST"
}

3) aws_lambda_function

To launch a Lambda function, you’ll need the code and an IAM role. The code is uploaded as a deployment package (a zip file), supplied either directly or from an S3 bucket. A sample Terraform AWS Lambda function is shown below:

resource "aws_iam_role" "iam_role_for_lambda" {
  name = "iam_role_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_lambda_function" "sample_lambda" {
  filename      = "lambda_function_payload.zip"
  function_name = "lambda_terraform_function_name"
  role          = aws_iam_role.iam_role_for_lambda.arn
  handler       = "data.test"

  # The filebase64sha256() function is available in Terraform 0.11.12 and later.
  # For Terraform 0.11.11 and earlier, combine base64sha256() and file():
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = filebase64sha256("lambda_function_payload.zip")

  runtime = "nodejs12.x"

  environment {
    variables = {
      foo = "bar"
    }
  }
}

It also has properties like vpc_config, file_system_config, and others.

Other optional resources such as aws_efs_file_system, aws_efs_mount_target, and aws_efs_access_point may be required by the function.

4) aws_cloudwatch_log_group

This resource is required to create a log group and set a retention policy for a function. This is required for logging and monitoring Lambda functions.

resource "aws_cloudwatch_log_group" "example" {
  name              = "/aws/lambda/${var.lambda_function_name}"
  retention_in_days = 14
}

5) aws_lambda_function_event_invoke_config

You’ll need this resource to configure asynchronous invocation for a Lambda function.

resource "aws_lambda_function_event_invoke_config" "lambdaexample" {
  function_name                = aws_lambda_alias.example.function_name
  maximum_event_age_in_seconds = 60
  maximum_retry_attempts       = 0
  qualifier                    = aws_lambda_alias.example.name

  destination_config {
    on_failure {
      destination = aws_sqs_queue.example.arn
    }

    on_success {
      destination = aws_sns_topic.example.arn
    }
  }
}

6) aws_lambda_layer_version

This resource manages a Lambda Layer version. Layers let you share common code and dependencies across multiple Lambda functions.

resource "aws_lambda_layer_version" "lambda_nodejs_layer" {
  filename   = "lambda_nodejs_layer_payload.zip"
  layer_name = "lambda_layer_nodejs"

  compatible_runtimes = ["nodejs12.x"]
}

7) aws_lambda_permission

This resource grants other AWS services, such as S3 or EventBridge, permission to invoke the Lambda function.

resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.s3_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.example_bucket.arn
  qualifier     = aws_lambda_alias.s3_alias.name
}

8) aws_api_gateway

For synchronous flows, API Gateway is the most important resource. Terraform can create a Lambda function and integrate it with API Gateway.

The four resources needed to establish API Gateway and its interaction with a function are listed below:

resource "aws_api_gateway_rest_api" "example" {
  name        = "ServerlessAppExample"
  description = "Terraform Serverless Application Example"
}

resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  parent_id   = aws_api_gateway_rest_api.example.root_resource_id
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "lambda" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  resource_id = aws_api_gateway_method.proxy.resource_id
  http_method = aws_api_gateway_method.proxy.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.example.invoke_arn
}

In this article, you saw how to implement the Terraform Lambda function. You got a deep understanding of different Terraform Lambda Resources. In case you want to export data from a source of your choice into your desired Database/destination then Hevo Data is the right choice for you! 


Hevo Data, a No-code Data Pipeline provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of Desired Destinations, with a few clicks. Hevo Data with its strong integration with 100+ sources (including 40+ free sources) allows you to not only export data from your desired data sources & load it to the destination of your choice, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of learning about Terraform Lambda, and let us know about any questions, in the comments section below!

Former Research Analyst, Hevo Data

Harsh is a former research analyst with a passion for data, software architecture, and technical writing. He has written more than 100 articles on data integration and infrastructure.
