AWS Lambda is a serverless, event-driven compute service offered by Amazon as part of Amazon Web Services. It runs code in response to events and automatically manages the computing resources that code requires.

This article walks through the steps for deploying Lambda functions with AWS CodePipeline in detail. It also gives a brief introduction to AWS Lambda and AWS CodePipeline.

What is AWS Lambda?

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any application or backend service without provisioning or managing servers. You can trigger Lambda from more than 200 AWS services and SaaS applications, and you only pay for what you use.

It also integrates with a variety of other AWS services, including API Gateway, DynamoDB, and RDS, and forms the foundation of AWS serverless solutions. Lambda supports a wide range of popular languages and runtimes, making it a good fit for serverless developers.

Key Features of AWS Lambda

  • Concurrency and Scaling Controls: AWS Lambda offers concurrency limits and provisioned concurrency to optimize scalability and responsiveness.
  • Functions Defined as Container Images: You can package and deploy Lambda functions as container images, using familiar container tooling for development, testing, and deployment.
  • Code Signing: Ensure integrity and trustworthiness by utilizing Lambda’s code signing feature to deploy only authenticated and unaltered code.
  • Lambda Extensions: Enhance Lambda functionality with extensions, facilitating integration with monitoring, observability, security, and governance tools.
  • Function Blueprints: Access sample code and preset configurations for Node.js and Python runtimes to streamline Lambda integration with AWS and third-party services.
  • Database Accessibility: Lambda leverages a database proxy to efficiently manage connections and queries, enabling high concurrency without impacting database resources.
  • Access to File Systems: Mount Amazon EFS file systems to Lambda functions’ local directories, facilitating secure and concurrent access to shared resources.

What is AWS CodePipeline?

AWS CodePipeline is a fully managed continuous delivery service that assists you in automating your release pipelines for quick and reliable application and infrastructure updates. Based on the release model you define, CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change. AWS CodePipeline can be easily integrated with third-party services like GitHub or your custom plugin. You only pay for what you use with AWS CodePipeline. There are no hidden costs or long-term obligations.

CodePipeline is a continuous delivery service that automates the building, testing, and deployment of your software.

Continuous Delivery is a methodology for software development that automates the release process. Every software change is built, tested, and deployed to production automatically.

Continuous Integration is a software development practice in which members of a team use a version control system and frequently merge their work into a central location, such as the main branch.

CodePipeline can use CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks Stacks to deploy applications to EC2 instances. CodePipeline can also use Amazon ECS to deploy container-based applications to services. Developers can also plug in other tools or services, such as build services, test providers, or other deployment targets or systems, using the integration points provided by CodePipeline.

Deploying CodePipeline Lambda

AWS CodePipeline is a service that lets you build continuous delivery pipelines for AWS-based applications. You can construct a pipeline to deploy your Lambda application, and you can also configure a pipeline to invoke a Lambda function to complete a task when the pipeline runs. When you create a Lambda application in the Lambda console, Lambda creates a pipeline that includes source, build, and deploy stages.

CodePipeline invokes your function asynchronously with an event that contains details about the job. The following example shows an event from a pipeline that invokes a function named my-function.

Example CodePipeline Event

{
    "CodePipeline.job": {
        "id": "c0d76431-b0e7-xmpl-97e3-e8ee786eb6f6",
        "accountId": "123456789012",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "my-function",
                    "UserParameters": "{"KEY": "VALUE"}"
                }
            },
            "inputArtifacts": [
                {
                    "name": "my-pipeline-SourceArtifact",
                    "revision": "e0c7xmpl2308ca3071aa7bab414de234ab52eea",
                    "location": {
                        "type": "S3",
                        "s3Location": {
                            "bucketName": "us-west-2-123456789012-my-pipeline",
                            "objectKey": "my-pipeline/test-api-2/TdOSFRV"
                        }
                    }
                }
            ],
            "outputArtifacts": [
                {
                    "name": "invokeOutput",
                    "revision": null,
                    "location": {
                        "type": "S3",
                        "s3Location": {
                            "bucketName": "us-west-2-123456789012-my-pipeline",
                            "objectKey": "my-pipeline/invokeOutp/D0YHsJn"
                        }
                    }
                }
            ],
            "artifactCredentials": {
                "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
                "secretAccessKey": "6CGtmAa3lzWtV7a...",
                "sessionToken": "IQoJb3JpZ2luX2VjEA...",
                "expirationTime": 1575493418000
            }
        }
    }
}

To complete the job, the function must call the CodePipeline API to signal success or failure. The following Node.js example uses the PutJobSuccessResult operation to signal success; it reads the job ID for the API call from the event object.

Example index.js

// Sample Lambda handler that reports success back to CodePipeline
var AWS = require('aws-sdk')
var codepipeline = new AWS.CodePipeline()

exports.handler = async (event) => {
    console.log(JSON.stringify(event, null, 2))
    // The job ID identifies this invocation to CodePipeline
    var jobId = event["CodePipeline.job"].id
    var params = {
        jobId: jobId
    }
    // Signal success so the pipeline can proceed to the next action
    return codepipeline.putJobSuccessResult(params).promise()
}

Because CodePipeline invokes your function asynchronously, Lambda queues the event and retries the invocation if your function returns an error. You can configure a destination for events that your function couldn't handle.
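
As a rough sketch of how you might configure such a destination with boto3 (the function name, retry count, and queue ARN below are placeholders, not values from this walkthrough):

import boto3

lambda_client = boto3.client('lambda')

# Route events that still fail after retries to an SQS queue (hypothetical ARN)
lambda_client.put_function_event_invoke_config(
    FunctionName='my-function',
    MaximumRetryAttempts=2,  # Lambda allows 0-2 retries for asynchronous invocations
    DestinationConfig={
        'OnFailure': {
            'Destination': 'arn:aws:sqs:us-west-2:123456789012:my-function-failures'
        }
    }
)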

You can use AWS CodePipeline to build a continuous delivery pipeline for your Lambda application. CodePipeline combines source control, build, and deployment resources into a pipeline that runs every time you make a change to your application's source code.

Permissions

To invoke a function, the pipeline's service role must have permission to use the following API operations:

  • lambda:ListFunctions
  • lambda:InvokeFunction

These permissions are included in the pipeline service role by default.

The function requires the following permissions in its execution role to complete a job:

  • codepipeline:PutJobSuccessResult
  • codepipeline:PutJobFailureResult

The AWSCodePipelineCustomActionAccess managed policy includes these permissions.
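
If you manage the execution role programmatically, one way to attach that managed policy is with boto3; this is a minimal sketch, assuming the role name used later in this walkthrough:

import boto3

iam = boto3.client('iam')

# Attach the AWS managed policy that grants PutJobSuccessResult/PutJobFailureResult
iam.attach_role_policy(
    RoleName='CodePipelineLambdaExecRole',  # assumed role name from this walkthrough
    PolicyArn='arn:aws:iam::aws:policy/AWSCodePipelineCustomActionAccess'
)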

Follow these steps to create a CodePipeline that uses Lambda functions:

Create a Pipeline

This is the first step in getting CodePipeline and Lambda working together. In this step, you create the pipeline to which you later add the Lambda function. You can skip this step if you already have such a pipeline configured for your account in the same Region where you plan to create the Lambda function.

To create the pipeline, follow the steps below:

  • Step 1: Create a two-stage pipeline, an Amazon S3 bucket, and CodeDeploy resources. For your instance types, select Amazon Linux. You can name the pipeline anything you want, but the steps in this topic use MyLambdaTestPipeline.
  • Step 2: On the status page for your pipeline, select Details in the CodeDeploy action. On the deployment group's details page, select an instance ID from the list.
  • Step 3: On the Details tab for the instance in the Amazon EC2 console, copy the Public IPv4 address (for example, 192.0.2.4). This address is the target of the Lambda function; if you prefer, you can also look it up programmatically, as sketched below.
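
The following minimal boto3 sketch shows the programmatic equivalent of Step 3; the instance ID is a placeholder for the one you selected in Step 2:

import boto3

ec2 = boto3.client('ec2')

# Look up the public IPv4 address of the CodeDeploy instance (placeholder ID)
response = ec2.describe_instances(InstanceIds=['i-0123456789abcdef0'])
instance = response['Reservations'][0]['Instances'][0]
print(instance['PublicIpAddress'])  # for example, 192.0.2.4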

The default CodePipeline service role policy includes the Lambda permissions required to invoke the function. If you modified the default service role or selected a different one, make sure the role's policy allows the lambda:InvokeFunction and lambda:ListFunctions permissions. Otherwise, pipelines that include Lambda actions fail.


Create the Lambda Function

In this step, you create a Lambda function that makes an HTTP request and checks a webpage for a line of text. As part of this step, you also create an IAM policy and a Lambda execution role.

To create the execution role, follow the steps given below:

  • Step 1: Log in to the AWS Management Console and go to https://console.aws.amazon.com/iam/ to access the IAM console.
  • Step 2: Select Policies, and then select Create Policy. Choose the JSON tab, and then paste the following policy into the field.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Action": [
        "codepipeline:PutJobSuccessResult",
        "codepipeline:PutJobFailureResult"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
  • Step 3: Select Review Policy.
  • Step 4: On the Review Policy page, type a name for the policy in the Name field (for example, CodePipelineLambdaExecPolicy). In the Description field, enter Enables Lambda to execute code. Choose Create Policy. These are the minimum permissions a Lambda function needs to work with Amazon CloudWatch and CodePipeline. If you want functions that interact with other AWS resources, expand this policy to allow the actions those functions require.
  • Step 5: From the policy dashboard page, choose Roles, and then choose Create Role.
  • Step 6: On the Create Role page, choose AWS service. Choose Lambda, and then choose Next: Permissions.
  • Step 7: On the Attach Permissions Policies page, select the check box next to CodePipelineLambdaExecPolicy, and then choose Next: Tags and Next: Review.
  • Step 8: Enter a name for the role in Role Name on the Review page, then choose Create Role.

To create the sample Function, follow the steps below:

  • Step 1: Open the AWS Lambda console at https://console.aws.amazon.com/lambda/ after logging in to the AWS Management Console.
  • Step 2: Choose Create Function from the Functions page.
  • Step 3: On the Create Function page, select Author from scratch. In Function Name, give your Lambda function a name (for example, MyLambdaFunctionForAWSCodePipeline). In Runtime, select Node.js 10.x.
  • Step 4: In Role, select Choose an existing role. In Existing role, select your role, and then choose Create Function. The detail page for your newly created function appears.
  • Step 5: CodePipeline stores the job details in the event object under the CodePipeline.job key. Paste the following code into the Function Code box:
var assert = require('assert');
var AWS = require('aws-sdk');
var http = require('http');

exports.handler = function(event, context) {

    var codepipeline = new AWS.CodePipeline();
    
    // Retrieve the Job ID from the Lambda action
    var jobId = event["CodePipeline.job"].id;
    
    // Retrieve the value of UserParameters from the Lambda action configuration
    // in CodePipeline, in this case a URL to be health checked by this function.
    var url = event["CodePipeline.job"].data.actionConfiguration.configuration.UserParameters; 
    
    // Notify CodePipeline of a successful job
    var putJobSuccess = function(message) {
        var params = {
            jobId: jobId
        };
        codepipeline.putJobSuccessResult(params, function(err, data) {
            if(err) {
                context.fail(err);      
            } else {
                context.succeed(message);      
            }
        });
    };
    
    // Notify CodePipeline of a failed job
    var putJobFailure = function(message) {
        var params = {
            jobId: jobId,
            failureDetails: {
                message: JSON.stringify(message),
                type: 'JobFailed',
                externalExecutionId: context.awsRequestId
            }
        };
        codepipeline.putJobFailureResult(params, function(err, data) {
            context.fail(message);      
        });
    };
    
    // Validate the URL passed in UserParameters
    if(!url || url.indexOf('http://') === -1) {
        putJobFailure('The UserParameters field must contain a valid URL address to test, including http://');  
        return;
    }
    
    // Helper function to make a HTTP GET request to the page.
    // The helper will test the response and succeed or fail the job accordingly 
    var getPage = function(url, callback) {
        var pageObject = {
            body: '',
            statusCode: 0,
            contains: function(search) {
                return this.body.indexOf(search) > -1;    
            }
        };
        http.get(url, function(response) {
            pageObject.body = '';
            pageObject.statusCode = response.statusCode;
            
            response.on('data', function (chunk) {
                pageObject.body += chunk;
            });
            
            response.on('end', function () {
                callback(pageObject);
            });
            
            response.resume(); 
        }).on('error', function(error) {
            // Fail the job if our request failed
            putJobFailure(error);    
        });           
    };
    
    getPage(url, function(returnedPage) {
        try {
            // Check if the HTTP response has a 200 status
            assert(returnedPage.statusCode === 200);
            // Check if the page contains the text "Congratulations"
            // You can change this to check for different text, or add other tests as required
            assert(returnedPage.contains('Congratulations'));  
            
            // Succeed the job
            putJobSuccess("Tests passed.");
        } catch (ex) {
            // If any of the assertions failed then fail the job
            putJobFailure(ex);    
        }
    });     
};
  • Step 6: Leave Handler at its default value, and leave Role set to the role you created earlier (for example, CodePipelineLambdaExecRole).
  • Step 7: In Basic settings, set Timeout to 20 seconds.
  • Step 8: Select Save.

Add Lambda Function in CodePipeline Console

This step involves adding a new stage to your pipeline and then adding a Lambda action to that stage that calls your function.

To add a stage, follow these steps:

  • Step 1: Log in to the AWS Management Console and go to https://console.aws.amazon.com/codesuite/codepipeline/home to access the CodePipeline console.
  • Step 2: Choose the pipeline you created on the Welcome page.
  • Step 3: Select Edit from the Pipeline View page.
  • Step 4: On the Edit page, choose + Add stage to add a stage after the deployment stage that contains the CodeDeploy action. Give the stage a name (for example, LambdaStage), and choose Add stage. You can also add your Lambda action to an existing stage. For demonstration purposes, you add the Lambda function as the only action in its own stage so that you can easily track its progress as artifacts move through the pipeline.
  • Step 5: Select + Add action group. In Edit action, enter a name for your Lambda action in Action name (for example, MyLambdaAction). In Provider, select AWS Lambda. In Function name, choose or enter the name of your Lambda function (for example, MyLambdaFunctionForAWSCodePipeline). In User parameters, enter the IP address of the Amazon EC2 instance you copied earlier (for example, http://192.0.2.4), and then choose Done. In a real-world scenario, you could provide your registered website name instead of an IP address (for example, http://www.example.com).
  • Step 6: Choose Save from the Edit Action page.

Test the Pipeline

Release the most recent change through the pipeline to test the function.

Follow these steps to use the console to run the most recent version of an artifact through a pipeline:

  • Step 1: Choose Release Change from the Pipeline Details page. This runs the most recent revision available in each source location specified in a source action through the pipeline.
  • Step 2: When the Lambda action is finished, select the Details link to view the function's log stream in Amazon CloudWatch, which includes the invocation's billed duration. If the function fails, the CloudWatch log records the reason for the failure. You can also fetch the log events programmatically, as sketched below.
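
For example, a minimal boto3 sketch for reading the function's most recent log stream might look like this (the log group name assumes Lambda's /aws/lambda/<function-name> naming convention):

import boto3

logs = boto3.client('logs')
group = '/aws/lambda/MyLambdaFunctionForAWSCodePipeline'

# Find the most recent log stream for the function
streams = logs.describe_log_streams(
    logGroupName=group, orderBy='LastEventTime', descending=True, limit=1)

# Print its events, which include the REPORT line with the billed duration
events = logs.get_log_events(
    logGroupName=group,
    logStreamName=streams['logStreams'][0]['logStreamName'])
for event in events['events']:
    print(event['message'], end='')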

Next Steps

You can try the following now that you’ve created a Lambda function and added it as an action in a pipeline:

  • To check other websites, add additional Lambda actions to your stage.
  • To check for a different text string, modify the Lambda function.
  • Learn more about Lambda functions, and then create and add your own functions to pipelines.
  • To avoid potential charges, after you've finished experimenting, consider removing the Lambda action from your pipeline, deleting the function from AWS Lambda, and deleting the role from IAM; a cleanup sketch follows this list.
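
A minimal cleanup sketch with boto3, assuming the function and role names used in this walkthrough (remove the Lambda action from the pipeline first):

import boto3

lambda_client = boto3.client('lambda')
iam = boto3.client('iam')

# Delete the demonstration function
lambda_client.delete_function(FunctionName='MyLambdaFunctionForAWSCodePipeline')

# A role's attached policies must be detached before the role can be deleted
role_name = 'CodePipelineLambdaExecRole'
attached = iam.list_attached_role_policies(RoleName=role_name)['AttachedPolicies']
for policy in attached:
    iam.detach_role_policy(RoleName=role_name, PolicyArn=policy['PolicyArn'])
iam.delete_role(RoleName=role_name)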

Example JSON Event 

The example below shows a sample JSON event sent to Lambda by CodePipeline. This event has the same structure as the GetJobDetails API response, but without the actionTypeId and pipelineContext data types. Both the JSON event and the GetJobDetails API response include two action configuration details: FunctionName and UserParameters. The placeholder values in the example are illustrations, not real values.

{
    "CodePipeline.job": {
        "id": "11111111-abcd-1111-abcd-111111abcdef",
        "accountId": "111111111111",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "MyLambdaFunctionForAWSCodePipeline",
                    "UserParameters": "some-input-such-as-a-URL"
                }
            },
            "inputArtifacts": [
                {
                    "location": {
                        "s3Location": {
                            "bucketName": "the name of the bucket configured as the pipeline artifact store in Amazon S3, for example codepipeline-us-east-2-1234567890",
                            "objectKey": "the name of the application, for example CodePipelineDemoApplication.zip"
                        },
                        "type": "S3"
                    },
                    "revision": null,
                    "name": "ArtifactName"
                }
            ],
            "outputArtifacts": [],
            "artifactCredentials": {
                "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                "sessionToken": "MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w
 0BAQUFADCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZ
 WF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIw
 EAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5
 jb20wHhcNMTEwNDI1MjA0NTIxWhcNMTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBh
 MCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBb
 WF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMx
 HzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wgZ8wDQYJKoZIhvcNAQE
 BBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ21uUSfwfEvySWtC2XADZ4nB+BLYgVI
 k60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9TrDHudUZg3qX4waLG5M43q7Wgc/MbQ
 ITxOUSQv7c7ugFFDzQGBzZswY6786m86gpEIbb3OhjZnzcvQAaRHhdlQWIMm2nr
 AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4nUhVVxYUntneD9+h8Mg9q6q+auN
 KyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0FkbFFBjvSfpJIlJ00zbhNYS5f6Guo
 EDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTbNYiytVbZPQUQ5Yaxu2jXnimvw
 3rrszlaEXAMPLE=",
                "accessKeyId": "AKIAIOSFODNN7EXAMPLE"
            },
            "continuationToken": "A continuation token if continuing job",
            "encryptionKey": { 
              "id": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
              "type": "KMS"
            }
        }
    }
}

Sample Functions

The sample Lambda function below demonstrates additional functionality that you can use in your pipelines. As noted in its introduction, you may need to modify the Lambda execution role's policy to use this function.

Sample Python function that uses an AWS CloudFormation template

The following example shows a function that creates or updates a stack from an AWS CloudFormation template. The template creates an Amazon S3 bucket and, to minimize costs, is for demonstration purposes only. Ideally, you should delete the stack before you upload anything to the bucket: once you've uploaded files to the bucket, you can't delete the bucket when you delete the stack. You must manually empty the bucket before you can delete it; a cleanup sketch follows this paragraph.
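
As a sketch of that cleanup, assuming a stack named MyTestStack created from the template shown later (which exports the bucket name as the BucketName output):

import boto3

cf = boto3.client('cloudformation')
s3 = boto3.resource('s3')

# Read the bucket name from the stack's outputs
stack = cf.describe_stacks(StackName='MyTestStack')['Stacks'][0]
bucket_name = next(output['OutputValue'] for output in stack['Outputs']
                   if output['OutputKey'] == 'BucketName')

# Empty the bucket first; CloudFormation cannot delete a non-empty bucket
s3.Bucket(bucket_name).objects.all().delete()
cf.delete_stack(StackName='MyTestStack')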

This Python example assumes that you already have a pipeline with an Amazon S3 bucket as a source action, or that you have access to a versioned Amazon S3 bucket that you can use with the pipeline. You create the AWS CloudFormation template, compress it, and upload it to that bucket as a .zip file. Then, in your pipeline, you add a source action that retrieves the .zip file from the bucket.

As shown in this example:

  • The use of JSON-encoded user parameters to pass multiple configuration values to the function (get_user_params).
  • The interaction with .zip artifacts in an artifact bucket (get_template).
  • The use of a continuation token (continue_job_later) to monitor a long-running asynchronous process. This allows the action to continue and the function to succeed even if the process exceeds Lambda's fifteen-minute runtime limit.

To use this sample Lambda function, the Lambda execution role’s policy must include Allow permissions in AWS CloudFormation, Amazon S3, and CodePipeline, as shown in this sample policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Action": [
                "codepipeline:PutJobSuccessResult",
                "codepipeline:PutJobFailureResult"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "cloudformation:DescribeStacks",
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
To create the AWS CloudFormation template, open any plain-text editor and paste in the following code:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "CloudFormation template which creates an S3 bucket",
  "Resources" : {
    "MySampleBucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
      }
    }
  },
  "Outputs" : {
    "BucketName" : {
      "Value" : { "Ref" : "MySampleBucket" },
      "Description" : "The name of the S3 bucket"
    }
  } 
}

Save this as a JSON file named template.json in a directory named template-package. Create a compressed (.zip) file of this directory and file named template-package.zip, and upload it to a versioned Amazon S3 bucket; you can use a bucket that you've already set up for your pipeline (a sketch follows). Next, add a source action to your pipeline that retrieves the .zip file. Name this action's output MyTemplate.
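
A minimal sketch of the packaging and upload steps, assuming the local paths above and a placeholder bucket name:

import zipfile

import boto3

# Zip the template directory and file created above
with zipfile.ZipFile('template-package.zip', 'w') as zf:
    zf.write('template-package/template.json')

# Upload the archive to your versioned artifact bucket (placeholder name)
s3 = boto3.client('s3')
s3.upload_file('template-package.zip',
               'codepipeline-us-east-2-1234567890',
               'template-package.zip')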

To add the following code as a function in Lambda, follow the steps below:

  • Step 1: Select Create Function from the Lambda Console.
  • Step 2: Choose Author from scratch on the Create Function page. Enter a name for your Lambda function in Function Name.
  • Step 3: Choose Python 2.7 from the Runtime menu.
  • Step 4: Under Choose or create an execution role, select Use an existing role. In Existing role, choose your role, and then select Create Function. The detail page for the function you just created appears.
  • Step 5: In the Function Code box, paste the following code:
from __future__ import print_function
from boto3.session import Session

import json
import urllib
import boto3
import zipfile
import tempfile
import botocore
import traceback

print('Loading function')

cf = boto3.client('cloudformation')
code_pipeline = boto3.client('codepipeline')

def find_artifact(artifacts, name):
    """Finds the artifact 'name' among the 'artifacts'
    
    Args:
        artifacts: The list of artifacts available to the function
        name: The artifact we wish to use
    Returns:
        The artifact dictionary found
    Raises:
        Exception: If no matching artifact is found
    
    """
    for artifact in artifacts:
        if artifact['name'] == name:
            return artifact
            
    raise Exception('Input artifact named "{0}" not found in event'.format(name))

def get_template(s3, artifact, file_in_zip):
    """Gets the template artifact
    
    Downloads the artifact from the S3 artifact store to a temporary file
    then extracts the zip and returns the file containing the CloudFormation
    template.
    
    Args:
        artifact: The artifact to download
        file_in_zip: The path to the file within the zip containing the template
        
    Returns:
        The CloudFormation template as a string
        
    Raises:
        Exception: Any exception thrown while downloading the artifact or unzipping it
    
    """
    bucket = artifact['location']['s3Location']['bucketName']
    key = artifact['location']['s3Location']['objectKey']
    
    with tempfile.NamedTemporaryFile() as tmp_file:
        s3.download_file(bucket, key, tmp_file.name)
        with zipfile.ZipFile(tmp_file.name, 'r') as zip:
            return zip.read(file_in_zip)   
   
def update_stack(stack, template):
    """Start a CloudFormation stack update
    
    Args:
        stack: The stack to update
        template: The template to apply
        
    Returns:
        True if an update was started, false if there were no changes
        to the template since the last update.
        
    Raises:
        Exception: Any exception besides "No updates are to be performed."
    
    """
    try:
        cf.update_stack(StackName=stack, TemplateBody=template)
        return True
        
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Message'] == 'No updates are to be performed.':
            return False
        else:
            raise Exception('Error updating CloudFormation stack "{0}"'.format(stack), e)

def stack_exists(stack):
    """Check if a stack exists or not
    
    Args:
        stack: The stack to check
        
    Returns:
        True or False depending on whether the stack exists
        
    Raises:
        Any exceptions raised .describe_stacks() besides that
        the stack doesn't exist.
        
    """
    try:
        cf.describe_stacks(StackName=stack)
        return True
    except botocore.exceptions.ClientError as e:
        if "does not exist" in e.response['Error']['Message']:
            return False
        else:
            raise e

def create_stack(stack, template):
    """Starts a new CloudFormation stack creation
    
    Args:
        stack: The stack to be created
        template: The template for the stack to be created with
        
    Throws:
        Exception: Any exception thrown by .create_stack()
    """
    cf.create_stack(StackName=stack, TemplateBody=template)
 
def get_stack_status(stack):
    """Get the status of an existing CloudFormation stack
    
    Args:
        stack: The name of the stack to check
        
    Returns:
        The CloudFormation status string of the stack such as CREATE_COMPLETE
        
    Raises:
        Exception: Any exception thrown by .describe_stacks()
        
    """
    stack_description = cf.describe_stacks(StackName=stack)
    return stack_description['Stacks'][0]['StackStatus']
  
def put_job_success(job, message):
    """Notify CodePipeline Lambda of a successful job
    
    Args:
        job: The CodePipeline job ID
        message: A message to be logged relating to the job status
        
    Raises:
        Exception: Any exception thrown by .put_job_success_result()
    
    """
    print('Putting job success')
    print(message)
    code_pipeline.put_job_success_result(jobId=job)
  
def put_job_failure(job, message):
    """Notify CodePipeline Lambda of a failed job
    
    Args:
        job: The CodePipeline job ID
        message: A message to be logged relating to the job status
        
    Raises:
        Exception: Any exception thrown by .put_job_failure_result()
    
    """
    print('Putting job failure')
    print(message)
    code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'})
 
def continue_job_later(job, message):
    """Notify CodePipeline Lambda of a continuing job
    
    This will cause CodePipeline to invoke the function again with the
    supplied continuation token.
    
    Args:
        job: The JobID
        message: A message to be logged relating to the job status
        continuation_token: The continuation token
        
    Raises:
        Exception: Any exception thrown by .put_job_success_result()
    
    """
    
    # Use the continuation token to keep track of any job execution state
    # This data will be available when a new job is scheduled to continue the current execution
    continuation_token = json.dumps({'previous_job_id': job})
    
    print('Putting job continuation')
    print(message)
    code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token)

def start_update_or_create(job_id, stack, template):
    """Starts the stack update or create process
    
    If the stack exists then update, otherwise create.
    
    Args:
        job_id: The ID of the CodePipeline job
        stack: The stack to create or update
        template: The template to create/update the stack with
    
    """
    if stack_exists(stack):
        status = get_stack_status(stack)
        if status not in ['CREATE_COMPLETE', 'ROLLBACK_COMPLETE', 'UPDATE_COMPLETE']:
            # If the CloudFormation stack is not in a state where
            # it can be updated again then fail the job right away.
            put_job_failure(job_id, 'Stack cannot be updated when status is: ' + status)
            return
        
        were_updates = update_stack(stack, template)
        
        if were_updates:
            # If there were updates then continue the job so it can monitor
            # the progress of the update.
            continue_job_later(job_id, 'Stack update started')  
            
        else:
            # If there were no updates then succeed the job immediately 
            put_job_success(job_id, 'There were no stack updates')    
    else:
        # If the stack doesn't already exist then create it instead
        # of updating it.
        create_stack(stack, template)
        # Continue the job so the pipeline will wait for the CloudFormation
        # stack to be created.
        continue_job_later(job_id, 'Stack create started') 

def check_stack_update_status(job_id, stack):
    """Monitor an already-running CloudFormation update/create
    
    Succeeds, fails or continues the job depending on the stack status.
    
    Args:
        job_id: The CodePipeline job ID
        stack: The stack to monitor
    
    """
    status = get_stack_status(stack)
    if status in ['UPDATE_COMPLETE', 'CREATE_COMPLETE']:
        # If the update/create finished successfully then
        # succeed the job and don't continue.
        put_job_success(job_id, 'Stack update complete')
        
    elif status in ['UPDATE_IN_PROGRESS', 'UPDATE_ROLLBACK_IN_PROGRESS', 
    'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS', 'CREATE_IN_PROGRESS', 
    'ROLLBACK_IN_PROGRESS']:
        # If the job isn't finished yet then continue it
        continue_job_later(job_id, 'Stack update still in progress') 
       
    else:
        # If the Stack is a state which isn't "in progress" or "complete"
        # then the stack update/create has failed so end the job with
        # a failed result.
        put_job_failure(job_id, 'Update failed: ' + status)

def get_user_params(job_data):
    """Decodes the JSON user parameters and validates the required properties.
    
    Args:
        job_data: The job data structure containing the UserParameters string which should be a valid JSON structure
        
    Returns:
        The JSON parameters decoded as a dictionary.
        
    Raises:
        Exception: The JSON can't be decoded or a property is missing.
        
    """
    try:
        # Get the user parameters which contain the stack, artifact and file settings
        user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
        decoded_parameters = json.loads(user_parameters)
            
    except Exception as e:
        # We're expecting the user parameters to be encoded as JSON
        # so we can pass multiple values. If the JSON can't be decoded
        # then fail the job with a helpful message.
        raise Exception('UserParameters could not be decoded as JSON')
    
    if 'stack' not in decoded_parameters:
        # Validate that the stack is provided, otherwise fail the job
        # with a helpful message.
        raise Exception('Your UserParameters JSON must include the stack name')
    
    if 'artifact' not in decoded_parameters:
        # Validate that the artifact name is provided, otherwise fail the job
        # with a helpful message.
        raise Exception('Your UserParameters JSON must include the artifact name')
    
    if 'file' not in decoded_parameters:
        # Validate that the template file is provided, otherwise fail the job
        # with a helpful message.
        raise Exception('Your UserParameters JSON must include the template file name')
    
    return decoded_parameters
    
def setup_s3_client(job_data):
    """Creates an S3 client
    
    Uses the credentials passed in the event by CodePipeline. These
    credentials can be used to access the artifact bucket.
    
    Args:
        job_data: The job data structure
        
    Returns:
        An S3 client with the appropriate credentials
        
    """
    key_id = job_data['artifactCredentials']['accessKeyId']
    key_secret = job_data['artifactCredentials']['secretAccessKey']
    session_token = job_data['artifactCredentials']['sessionToken']
    
    session = Session(aws_access_key_id=key_id,
        aws_secret_access_key=key_secret,
        aws_session_token=session_token)
    return session.client('s3', config=botocore.client.Config(signature_version='s3v4'))

def lambda_handler(event, context):
    """The Lambda function handler
    
    If a continuing job then checks the CloudFormation stack status
    and updates the job accordingly.
    
    If a new job then kick of an update or creation of the target
    CloudFormation stack.
    
    Args:
        event: The event passed by Lambda
        context: The context passed by Lambda
        
    """
    try:
        # Extract the Job ID
        job_id = event['CodePipeline.job']['id']
        
        # Extract the Job Data 
        job_data = event['CodePipeline.job']['data']
        
        # Extract the params
        params = get_user_params(job_data)
        
        # Get the list of artifacts passed to the function
        artifacts = job_data['inputArtifacts']
        
        stack = params['stack']
        artifact = params['artifact']
        template_file = params['file']
        
        if 'continuationToken' in job_data:
            # If we're continuing then the create/update has already been triggered
            # we just need to check if it has finished.
            check_stack_update_status(job_id, stack)
        else:
            # Get the artifact details
            artifact_data = find_artifact(artifacts, artifact)
            # Get S3 client to access artifact with
            s3 = setup_s3_client(job_data)
            # Get the JSON template file out of the artifact
            template = get_template(s3, artifact_data, template_file)
            # Kick off a stack update or create
            start_update_or_create(job_id, stack, template)  

    except Exception as e:
        # If any other exceptions which we didn't expect are raised
        # then fail the job and log the exception message.
        print('Function failed due to exception.') 
        print(e)
        traceback.print_exc()
        put_job_failure(job_id, 'Function exception: ' + str(e))
      
    print('Function complete.')   
    return "Complete."
  • Step 6: Leave Handler as is, and leave Role set to the name you chose or created earlier (for example, CodePipelineLambdaExecRole).
  • Step 7: In Basic settings, for Timeout, replace the default of 3 seconds with 20.
  • Step 8: Select the option Save.
  • Step 9: In the CodePipeline console, edit the pipeline to include the function as an action in a stage. For the pipeline stage you want to change, choose Edit, and then choose Add action group. On the Edit action page, enter a name for your action in Action name. In Action provider, select AWS Lambda. Under Input artifacts, select MyTemplate. In UserParameters, you must provide a JSON string with three parameters:
    • Stack name
    • AWS CloudFormation template name and path to the file
    • Input artifact

Enclose the parameters in curly brackets ({}) and separate them with commas. For example, to create a stack named MyTestStack for a pipeline with the input artifact MyTemplate, enter the following in UserParameters: {"stack":"MyTestStack", "file":"template-package/template.json", "artifact":"MyTemplate"}.

  • Step 10: Save your changes to the pipeline, then test the action and Lambda function by manually releasing a change.

Conclusion

This blog talks about the step-by-step process required to deploy AWS CodePipeline Lambda. It explains how Lambda Functions can be deployed and run using CodePipeline. In addition to that, it also gives an overview of AWS Lambda and AWS CodePipeline.

Hevo Data, a no-code data pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of desired destinations with a few clicks. Hevo Data, with its strong integration with 150+ sources (including 40+ free sources), allows you to not only export data from your desired data sources and load it to the destination of your choice (such as AWS Redshift), but also transform and enrich your data to make it analysis-ready, so that you can focus on your key business needs and perform insightful analysis using BI tools.

Want to take Hevo for a spin? 

Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Skand Agrawal
Customer Experience Engineer, Hevo Data

Skand is a dedicated Customer Experience Engineer at Hevo Data, specializing in MySQL, Postgres, and REST APIs. With three years of experience, he efficiently troubleshoots customer issues, contributes to the knowledge base and SOPs, and assists customers in achieving their use cases through Hevo's platform.
