If you’ve spent any time on the internet, you’ve probably come across the terms Serverless, Function-as-a-Service, or AWS Lambda, and you may be curious about how Lambda works and how you’d use it to build a highly scalable, event-driven application. Lambda functions can serve web pages, process data streams, call APIs, and communicate with other Amazon Web Services offerings, among other things.

In this post, you’ll learn about AWS Lambda functions and how to use NodeJS Lambda to create a simple application.

What are AWS Services?

Amazon Web Services, Inc. is an Amazon subsidiary that offers metered, pay-as-you-go cloud computing platforms and APIs to consumers, businesses, and governments. Through these cloud computing web services, AWS server farms provide distributed computing capacity and software tools.

AWS, or Amazon Web Services, is a leader among Cloud Computing platforms. According to its website, AWS delivers a “highly reliable, scalable, low-cost infrastructure platform that supports hundreds of thousands of enterprises in 190 countries across the world.” According to Canalys’ 2018 research, AWS holds a market share of 32.6 percent, more than any other service provider.

What are AWS Lambda Functions?

As mentioned earlier, Lambda is an AWS computing service. It lets you run code without having to manage Cloud Servers. An AWS Lambda function is executed in response to an event, and each function has a single purpose: it may be as simple as fetching a blog post, writing one, or sending an email. On AWS, there are three ways to construct a Lambda function:

  • You could use the AWS console. AWS provides a Web Interface for managing and accessing its services. However, this approach is discouraged because writing a full-fledged program from the console is cumbersome.
  • You could use the AWS-provided Cloud-based IDE, which lets you develop, run, and debug your code in the browser.
  • You could use your preferred editor in your local development environment and deploy to production with a single command. This post covers the third option.

1) Lifecycle Events of AWS Lambda Functions

Each Lambda function runs in its own execution environment, which is provisioned with the resources specified in the function’s configuration (such as memory size). AWS Lambda handles the first call to a Lambda function differently from subsequent calls to the same function.

Calling a new Lambda function for the first time

AWS Lambda functions run in isolated environments, similar to containers. When you deploy a new AWS Lambda function (or update an existing one), a new container is created, your code is loaded into it, and the initialization code runs before the first request reaches the exposed handler method.

The Lambda function can finish in one of several ways:

  • Timeout: the user-specified timeout has been reached,
  • Controlled Termination: the handler function’s callback has been called,
  • Default Termination: all callbacks have completed execution (even if the handler function’s callback has not been triggered),
  • Crash: the process crashes.

Subsequent calls to an existing Lambda function

For subsequent calls, AWS Lambda may decide to create additional containers to serve your requests. In that case, the initialization process proceeds as described above.

If your AWS Lambda function has not changed and only a short time has passed since the last call, Lambda may reuse the existing container. This cuts out the time it takes to spin up a new container and load your code into it.
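
To take advantage of container reuse, it is common to create expensive resources (such as SDK clients or database connections) outside the handler so they survive across warm invocations. The sketch below is a minimal illustration of this pattern and not part of the original example:

const AWS = require('aws-sdk')
// Created once per container (cold start) and reused on warm invocations.
const s3 = new AWS.S3()

exports.handler = async function(event) {
  // Runs on every invocation and reuses the client created above.
  const data = await s3.listBuckets().promise()
  return data.Buckets.length
}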

2) NodeJS Lambda Handler

The NodeJS Lambda function handler is the event-processing mechanism in your function code. NodeJS Lambda executes the handler method when your function is invoked. When the handler exits or returns a response, it becomes available to handle another event. The following example function logs the contents of the event object and returns the location of the logs.

exports.handler = async function(event, context) {
  console.log("EVENT:\n" + JSON.stringify(event, null, 2))
  return context.logStreamName
}

Non-Async Handlers

The sample function below checks a URL and returns the HTTP status code to the invoker.

const https = require('https')
let url = "https://docs.aws.amazon.com/lambda/latest/dg/welcome.html"

exports.handler =  function(event, context, callback) {
  https.get(url, (res) => {
    callback(null, res.statusCode)
  }).on('error', (e) => {
    callback(Error(e))
  })
}

For non-async handlers, function execution continues until the event loop is empty or the function times out. The invoker does not receive the response until all event loop tasks have been completed; if the function times out, an error is returned instead. To tell the runtime to send the response immediately, set context.callbackWaitsForEmptyEventLoop to false.

In the following example, the response from Amazon S3 is returned to the invoker as soon as it is available. The pending timeout in the event loop is frozen and will resume the next time the function is invoked, if the container is reused.

Example index.js file: HTTP request with callback

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

exports.handler = function(event, context, callback) {
  context.callbackWaitsForEmptyEventLoop = false
  s3.listBuckets(null, callback)
  setTimeout(function () {
    console.log('Timeout complete.')
  }, 5000)
}

Async Handlers

Async handlers can use return and throw to send a response or an error to the invoker. To use these statements, the function must be declared with the async keyword.

If your code performs an asynchronous task, return a promise to make sure that it finishes running. When you resolve or reject the promise, NodeJS Lambda sends the response or error to the invoker.

Example index.js file: HTTP request with async handler and promises

const https = require('https')
let url = "https://docs.aws.amazon.com/lambda/latest/dg/welcome.html"   

exports.handler = async function(event) {
  const promise = new Promise(function(resolve, reject) {
    https.get(url, (res) => {
      resolve(res.statusCode)
    }).on('error', (e) => {
      reject(Error(e))
    })
  })
  return promise
}
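
An async handler can also report an error by simply throwing it. The following is a minimal, hypothetical sketch of that pattern (the photoUrl check is made up for illustration):

exports.handler = async function(event) {
  // Throwing from an async handler reports the error to the invoker.
  if (!event.photoUrl) {
    throw new Error('photoUrl is required')
  }
  return { statusCode: 200 }
}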

3) Benefits of Leveraging AWS NodeJS Lambda 

  • Lower Operating Costs: Serverless Computing can be thought of as outsourced infrastructure, where you pay someone else to manage your servers. Prices are lower because the same service is shared by many other businesses, and because you spend less time maintaining machines, you save in two areas: infrastructure costs and people costs.
  • Lower Scaling Costs: Serverless can save you a lot of money because Horizontal Scaling happens automatically and you only pay for the resources you actually use. Consider the following scenario: you send weekly marketing/sales emails, and your site’s peak traffic occurs only for a few hours after they are delivered. With serverless, you pay for the extra capacity only during those peak hours rather than provisioning for them all week.
  • Simple Operational Management: Since your infrastructure’s Auto-Scaling logic is handled by the vendor, you don’t need to manage Horizontal Scaling yourself; you only need to design your application so that it can be scaled horizontally.

4) Drawbacks of Leveraging AWS NodeJS Lambda

  • Vendor Control and Lock-in: Once you adopt a serverless solution, you hand over some control of your system to the Cloud provider, and if there is an outage you will almost certainly be affected. FaaS interfaces also differ from provider to provider, so moving your codebase from one provider to another will require code changes.
  • Multitenancy: This describes the situation in which numerous clients share the same server. It can lead to issues with Security, Reliability, and Performance (for example, a high-load customer slowing another one down).

Setting up NodeJS Lambda 

Step 1: Creating an AWS Account

You’ll need an AWS account because every Lambda function requires one. The account requirements are straightforward.

  • A phone number
  • A valid email address
  • A valid credit card

AWS has a free account tier. For one year, you can utilize practically all of AWS’s services without paying anything. To create your free account, follow the steps below:

  • Go to the AWS Management Console.
  • Click the Create a Free Account button.
  • Fill in your email address and a secure password.
  • Enter your contact information.
  • Enter your payment details using a valid credit card.
  • Answer Amazon’s phone call to complete the identity verification process: a four-digit number appears on your browser’s screen, and you type it in on your phone’s keypad.
  • Choose the free support plan.
  • Congratulations! You can now access your newly created AWS account.

Step 2: Setting up Your Local Development Environment

You’ll utilize the Serverless framework, a NodeJS-based command-line tool for writing and deploying NodeJS Lambda functions. It supports many providers, including AWS, Microsoft Azure, IBM OpenWhisk, Google Cloud Platform, Kubeless, and Spotinst. Installing the Serverless framework is simple, but you’ll need a NodeJS runtime first, and it must be a NodeJS version that AWS Lambda supports; this post uses the NodeJS 8.10 runtime. You’ll also want your Local Environment to match the Production Environment as closely as possible, and that includes the runtime version.

You can use NVM to install the NodeJS 8.10 runtime, or to switch between versions of NodeJS if you already have other versions installed.

$ nvm install v8.10

If you wish to transition between NodeJS versions, use this:

$ nvm use 8.10

Install the Serverless framework now that you have the Node.js runtime:

$ npm install -g serverless

Make sure you have the Serverless framework installed.

$ serverless --version
1.40.0

Step 3: Creating a Programmatic User on AWS

Your NodeJS Lambda function will eventually have to leave your Local Environment and enter an AWS environment before the magic can happen; this is called the Deployment Procedure. The Serverless framework needs to access your AWS resources and deploy your services on your behalf, so you will need a programmatic user account for this. A programmatic user does not sign in to the AWS console; instead, it accesses AWS services through API calls, using credentials that you will create shortly.

Below are simple instructions for creating a programmatic user.

  • Go to IAM Users in the AWS console.
  • To begin the user creation procedure, click Add User.
  • Type lambda-example-cli as the user name, check the box for programmatic access, and click Next: Permissions to continue.
  • Select Attach existing policies directly and search for AdministratorAccess. Check the box next to AdministratorAccess. A policy is an AWS object that defines the permissions of a user, role, or group.
  • Review your choices, then press the Create User button.
  • Download or copy the CSV file containing your access key ID and secret access key. Keep these private: anyone who has them can make API requests just as you would, with full access to and control over your AWS account.
  • Use your AWS credentials to configure the Serverless CLI. This is required for deployment:

serverless config credentials --provider aws --key <your_access_key_id> --secret <your_access_key_secret>

Step 4: Setting up Your First AWS NodeJS Lambda App

Now you’ll make a basic hello world application. Then you’ll build a more advanced program that takes an image from a URL, resizes it, and publishes it to AWS S3, a highly scalable object storage service.

You’ll begin by bootstrapping the project with the Serverless CLI tool:

$ serverless create --template hello-world

If the command above ran successfully, two files should have been created:

.
├── handler.js
└── serverless.yml

You can use the --template parameter to tell the Serverless CLI which template you want to use. The Serverless CLI tool supports a large number of templates; they can be found in this repository.
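
For example, to scaffold a new service from the standard aws-nodejs template into its own directory, you could run something like the following (the --path value here is just a placeholder name):

$ serverless create --template aws-nodejs --path my-image-service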


Step 5: Deploying Your App

The app is deployed using the deploy command. Run the following from the console:

$ serverless deploy

Once it’s finished, you’ll see output like the following in your console. The endpoint is the most important thing to note here.

...
api keys:
  None
endpoints:
  GET - https://ss7n639ye3.execute-api.eu-west-1.amazonaws.com/dev/hello-world
functions:
  helloWorld: serverless-hello-world-dev-helloWorld
...

If you open the endpoint in your browser, your request should be echoed back to you. Congratulations! You’ve just deployed your first NodeJS Lambda application.
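
You can also call the endpoint from the command line. The URL below is the sample endpoint from the deploy output above, so substitute your own:

$ curl https://ss7n639ye3.execute-api.eu-west-1.amazonaws.com/dev/hello-world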

Step 6: Building an Advanced Code Block

In the last section, you constructed the well-known hello world app. Now it’s time to take things a step further by creating something more complex.

As mentioned before, you’ll create a NodeJS Lambda app that fetches images from a URL, resizes them on the fly, and publishes them to an S3 bucket. You can either alter the previous hello world app or start from scratch with a new project.

You’ll need to make the following changes to serverless.yml:

# filename: serverless.yml
service: ImageUploaderService

custom:
  bucket: getting-started-lambda-example

# The `provider` block defines where your service will be deployed
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1
  stackName: imageUploader
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:PutObject" 
      Resource: 
        - "arn:aws:s3:::${self:custom.bucket}/*" 

# The `functions` block defines what code to deploy
functions:
  UploadImage:
    handler: uploadImage.handler
    # The `events` block defines how to trigger the uploadImage.handler code
    events:
      - http:
          path: upload
          method: post
          cors: true
    environment:
      Bucket: ${self:custom.bucket} 
resources:
  Resources:
    StorageBucket:
      Type: "AWS::S3::Bucket"
      Properties:
        BucketName: ${self:custom.bucket}

The Custom Object in the YAML file specifies the bucket’s name. You should use a different bucket name; you won’t be able to reuse the one from this example unless it is deleted. According to the AWS documentation, “An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.”

If you look closely, you’ll notice that the Stack is named imageUploader. A Stack is a group of AWS resources that can be managed as a single unit. A global iamRoleStatements block was also specified: a NodeJS Lambda function needs authorization to access other AWS resources, and in this case it needs permission to write to an S3 bucket. The IAM role statements grant that permission.

Below the NodeJS Lambda function UploadImage, you introduced a new object called environment. You can use it to set environment variables, which are available through the process.env object at runtime. Keep the handler’s name in mind. Finally, you defined an S3 Bucket resource that will be used to store the images.

Step 7: Adding NPM Packages

You don’t have to start from scratch. In NodeJS Lambda apps, you can use your favorite npm packages. On deployment, they’ll be packed alongside your functions. To generate unique names for photos, you’ll use the npm package uuid, and to manipulate uploaded images, you’ll use jimp.

You’ll need a package.json file:

npm init

You’ll be asked a few questions; answer them, then install the packages:

npm install jimp uuid

Now let’s change the handler function. Remember to rename the handler file to uploadImage.js; it’s a good idea to give your function a name that reflects what it does.

// filename: uploadImage.js

"use strict";

const AWS = require("aws-sdk");
const uuid = require("uuid/v4");
const Jimp = require("jimp");
const s3 = new AWS.S3();
const width = 200;
const height = 200;
const imageType = "image/png";
const bucket = process.env.Bucket;

module.exports.handler = (event, context, callback) => {
    let requestBody = JSON.parse(event.body);
    let photoUrl = requestBody.photoUrl;
    let objectId = uuid();
    let objectKey = `resize-${width}x${height}-${objectId}.png`;

    fetchImage(photoUrl)
        .then(image => image.resize(width, height)
            .getBufferAsync(imageType))
        .then(resizedBuffer => uploadToS3(resizedBuffer, objectKey))
        .then(function(response) {
            console.log(`Image ${objectKey} was uploaded and resized`);
            callback(null, {
                statusCode: 200, 
                body: JSON.stringify(response)
            });
        })
        .catch(error => console.log(error));
};

/**
* @param {*} data
* @param {string} key
*/
function uploadToS3(data, key) {
    return s3
        .putObject({
            Bucket: bucket,
            Key: key,
            Body: data,
            ContentType: imageType
        })
        .promise();
}

/**
* @param {string} url
* @returns {Promise}
*/
function fetchImage(url) {
    return Jimp.read(url);
}

In uploadImage.js, the fetchImage function is responsible for fetching the image from a URL; the jimp package’s readme contains more information about its inner workings. After resizing, you used the putObject method from the AWS SDK to upload the result to the S3 Bucket.

Step 8: Logging in AWS Lambda Functions

Logging provides insight into how apps perform in the real world, and it can save you a lot of time when diagnosing a problem. Many Log Aggregation services, such as Retrace, work nicely with AWS CloudWatch and Lambda functions. AWS Lambda automatically monitors your functions and reports metrics through Amazon CloudWatch, including total requests, duration, and error rates. Besides these monitoring and logging features, you can also log directly from your code using console:

console.log('An error occurred')

In the handler code (that is, uploadImage.js), you can log to AWS CloudWatch both when an image is successfully processed and when an error occurs.
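
As a minimal, hypothetical sketch of that pattern (separate from the actual uploadImage.js handler), success and error messages could be emitted like this:

exports.handler = async function(event) {
  try {
    // ... process the image ...
    console.log('Image processed successfully')  // success path, visible in CloudWatch
    return { statusCode: 200 }
  } catch (error) {
    console.error('Image processing failed', error)  // error output also ends up in CloudWatch
    throw error
  }
}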

Step 9: Deploying and Monitoring AWS Lambda

You deploy using the serverless deploy command whether you’re upgrading an existing application or creating a new one:

serverless deploy

Your output should look something like this (and remember to note the endpoint):

.....
  None
endpoints:
  POST - https://0sdampzeaj.execute-api.eu-west-1.amazonaws.com/dev/upload
functions:
  UploadImage: ImageUploaderService-dev-UploadImage
layers:

When you send a curl request with the correct request body to this endpoint, the image is downloaded from the URL, resized, and uploaded to an S3 bucket. Make sure the POST endpoint in the command below matches the one from your own deploy output.

curl -H "Content-type: application/json" -d '{"photoUrl":"https://www.petmd.com/sites/default/files/what-does-it-mean-when-cat-wags-tail.jpg"}' 'https://0sdampzeaj.execute-api.eu-west-1.amazonaws.com/dev/upload'

The logs are now visible in CloudWatch, and your photos are stored in an S3 bucket.
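
You can also tail the function’s CloudWatch logs from the command line with the Serverless CLI; the function name matches the one defined in serverless.yml:

$ serverless logs --function UploadImage --tail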

Conclusion

In this article, you saw how to implement NodeJS Lambda functions and gained a deeper understanding of AWS Lambda resources. In case you want to export data from a source of your choice into your desired Database/destination, then Hevo Data is the right choice for you!

Hevo Data, a No-code Data Pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of desired destinations, with a few clicks. Hevo Data, with its strong integration with 100+ sources (including 40+ free sources), allows you to not only export data from your desired data sources & load it to the destination of your choice, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools.

FAQ

How to deploy Node.js Lambda?

To deploy a Node.js Lambda function, write your code, zip the project directory (including node_modules), then upload it via the AWS Lambda console, CLI, or through AWS SAM/Serverless Framework. Set up the required handler and permissions.
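
As a rough, hypothetical illustration of the CLI route (the function name, role ARN, and file names below are placeholders), the flow looks something like this:

$ zip -r function.zip index.js node_modules
$ aws lambda create-function \
    --function-name my-function \
    --runtime nodejs8.10 \
    --handler index.handler \
    --role arn:aws:iam::123456789012:role/lambda-execution-role \
    --zip-file fileb://function.zip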

Why use the Lambda function?

Lambda functions are useful because they let you run code without provisioning or managing servers, scale automatically to handle varying loads, and charge only for the compute time used, making them cost-effective for event-driven applications.

How to build Lambda layer for Node.js?

To build a Lambda layer for Node.js, package your Node.js dependencies into a nodejs directory, zip the directory, and upload it to AWS Lambda Layers via the AWS console or CLI. You can then attach the layer to your Lambda function to reuse dependencies across functions.
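
A hypothetical sketch of that flow from the command line (the layer name and dependencies below are placeholders):

$ mkdir -p nodejs
$ npm install --prefix nodejs jimp uuid
$ zip -r layer.zip nodejs
$ aws lambda publish-layer-version \
    --layer-name image-dependencies \
    --zip-file fileb://layer.zip \
    --compatible-runtimes nodejs8.10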

Harsh Varshney
Research Analyst, Hevo Data

Harsh is a data enthusiast with over 2.5 years of experience in research analysis and software development. He is passionate about translating complex technical concepts into clear and engaging content. His expertise in data integration and infrastructure shines through his 100+ published articles, helping data practitioners solve challenges related to data engineering.