Many organizations and software developers now prefer serverless applications, and for good reasons. Serverless applications are simple to design, faster to develop, and less risky to deploy. As a result, developers are shifting from traditional server-based applications to serverless ones. For some developers, however, serverless development is still new territory, and moving away from traditional servers can be daunting. This is where AWS Chalice comes in.
The good news is that effective and useful tools exist to ease this transition. One such tool is AWS Chalice, a Python microframework developed by AWS engineers that helps you build serverless applications using Python. This article discusses how to deploy AWS Chalice using AWS Cloud.
What is AWS Chalice?
AWS Chalice is a microframework for the rapid development and deployment of serverless applications built on AWS Lambda functions in Python. With Chalice, you can also integrate other AWS services, including Amazon S3, Amazon API Gateway, and Amazon Simple Queue Service (SQS).
Beyond helping you develop Python applications, AWS Chalice lets you deploy them easily through a command-line interface for building, maintaining, and deploying software. At first glance the tool may not seem feature-rich, but it covers a wide range of use cases, and it can even generate the IAM policy for your application automatically.
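To give a sense of how little code a Chalice application needs, here is a minimal "hello world" sketch. The app name and route are purely illustrative and are not part of the project built later in this article:

# app.py - a minimal Chalice application (illustrative example)
from chalice import Chalice

app = Chalice(app_name='hello-chalice')

@app.route('/')
def index():
    # Chalice turns this function into a Lambda handler behind an
    # API Gateway route when the application is deployed.
    return {'hello': 'world'}

Running chalice deploy from such a project directory packages the code, creates the Lambda function and API Gateway resources, and generates the required IAM policy for you.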
Hevo Data, a Fully-managed Automated Data Pipeline solution, can help you automate, simplify & enrich your data flow from various AWS services such as AWS S3 and AWS Elasticsearch in a matter of minutes. Hevo’s end-to-end Data Management offers streamlined preparation of Data Pipelines for your AWS account. Additionally, Hevo completely automates the process of not only extracting data from AWS S3 and AWS Elasticsearch but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code.
With Hevo’s out-of-the-box connectors and blazing-fast Data Pipelines, you can extract & aggregate data from 100+ Data Sources (including 40+ Free Sources) including AWS S3 and AWS Elasticsearch straight into your Data Warehouse, Database, or any destination. To further streamline and prepare your data for analysis, you can process and enrich Raw Granular Data using Hevo’s robust & built-in Transformation Layer without writing a single line of code!
Experience an entirely automated hassle-free Data Pipeline from AWS Services using Hevo. Try our 14-day full access free trial today!
Prerequisites
To deploy AWS Chalice using AWS Cloud, you’ll need a basic understanding of Python, an active AWS account with credentials configured for the AWS CLI, and Node.js with npm installed (required by the AWS CDK).
How to Deploy AWS Chalice using AWS Cloud?
In this section, you will create a REST API that uses an Amazon DynamoDB table as its data store. The AWS Cloud Development Kit (CDK) will be used to deploy the app. Follow the steps below to get started.
Installation and Configuration
You need to install both the CDK and AWS Chalice. The CDK is written in TypeScript and requires Node.js and npm to be installed.
You can install CDK by running this command:
npm install -g aws-cdk
To know the version of CDK installed on your computer, run this command:
cdk --version
Now, let’s create a Python virtual environment and install AWS Chalice into it. Use Python 3.6 or later:
python3 -m venv demo
Now, let’s activate the environment:
. demo/bin/activate
The following command can help you to install Chalice:
python3 -m pip install chalice
You can confirm whether the installation was successful by running this command:
chalice --version
You can also choose to install an integration of CDK and Chalice using the following command:
python3 -m pip install "chalice[cdk]"
Create New Project
Let’s create a new project using the chalice new-project command. Give the project a name and choose “[CDK] REST API with DynamoDB backend” as the project template.
Now, change into the project directory and run the tree command to check that Chalice has generated the project:
cd cdkproject
tree
The application has two top-level directories, infrastructure and runtime, corresponding to the CDK application and the Chalice application respectively. The infrastructure directory is where you add any additional AWS resources the application needs, while the application code for the Lambda functions goes in the runtime directory. A typical layout is sketched below.
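For orientation, the generated project usually looks roughly like this; the exact files may vary between Chalice versions:

cdkproject/
├── infrastructure/
│   ├── app.py
│   ├── cdk.json
│   ├── requirements.txt
│   └── stacks/
│       ├── __init__.py
│       └── chaliceapp.py
├── requirements.txt
└── runtime/
    ├── .chalice/
    │   └── config.json
    ├── app.py
    └── requirements.txt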
It’s now time to install the project’s dependencies. You can do this by installing from the requirements file in the top-level directory of the project:
python3 -m pip install -r requirements.txt
If it’s your first time using the CDK, you should bootstrap your account. Run the following command inside the infrastructure directory:
cd infrastructure
cdk bootstrap
Deploy the Application
Run the following command within the infrastructure directory to deploy the application:
cdk deploy
The command will deploy an AWS Chalice application powered by CDK. Let’s go ahead and test the REST API.
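When the deployment finishes, cdk deploy prints the stack outputs, including the table name defined in the stack and the API endpoint. The output below is only illustrative; your stack name, API ID, and Region will differ:

Outputs:
cdkproject.AppTableName = cdkproject-AppTable-XXXXXXXXXXXX
cdkproject.EndpointURL = https://abcd.execute-api.us-west-2.amazonaws.com/api/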
Testing
To test the application, we will send HTTP requests to the EndpointURL shown in the deployment output. You can use httpie to make HTTP requests from the command line, so let’s install it:
python3 -m pip install httpie
Let’s now make HTTP requests to the EndpointUrl:
http POST https://abcd.execute-api.us-west-2.amazonaws.com/api/users/ username=nics name=nic
http https://abcd.execute-api.us-west-2.amazonaws.com/api/users/nics
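If you prefer curl over httpie, roughly equivalent requests look like this (using the same placeholder URL as above):

curl -X POST https://abcd.execute-api.us-west-2.amazonaws.com/api/users/ \
  -H "Content-Type: application/json" \
  -d '{"username": "nics", "name": "nic"}'
curl https://abcd.execute-api.us-west-2.amazonaws.com/api/users/nics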
Let’s now walk through the code for the sample application.
Loading data from AWS Sources such as AWS S3 and AWS Elasticsearch can be a mammoth task if the right set of tools is not leveraged. Hevo’s No-Code Automated Data Pipeline empowers you with a fully-managed solution for all your data collection, processing, and loading needs. Hevo’s native integration with S3 and Elasticsearch empowers you to transform and load data straight to a Data Warehouse such as Redshift, Snowflake, BigQuery & more!
Hevo lets you effortlessly connect to 100+ Sources (including 40+ free sources) and leverage its blazing-fast Data Pipelines to seamlessly extract, transform, and load data to your desired destination, such as a Data Warehouse.
Our platform has the following in store for you:
- Data Transformations: Best-in-class & Native Support for Complex Data Transformations at your fingertips. Code & No-code Flexibility designed for everyone.
- Smooth Schema Mapping: Fully-managed Automated Schema Management for incoming data with the desired destination.
- Quick Setup: Hevo with its automated features, can be set up in minimal time. Moreover, with its simple and interactive UI, it is extremely easy for new customers to work on and perform operations.
- Transformations: Hevo provides preload transformations to make your incoming data from AWS S3 and AWS Elasticsearch fit for the chosen destination. You can also use drag and drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few.
- Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Want to take Hevo for a spin? Sign up here for a 14-day free trial and experience the feature-rich Hevo.
The Sample Application Code
The code for the Lambda event handlers is stored in the runtime/ directory. The code for the REST API is defined in the runtime/app.py file, as shown below:
import os

import boto3
from chalice import Chalice

app = Chalice(app_name='cdkproject')
dynamodb = boto3.resource('dynamodb')
dynamodb_table = dynamodb.Table(os.environ.get('APP_TABLE_NAME', ''))


@app.route('/users', methods=['POST'])
def create_user():
    ...


@app.route('/users/{username}', methods=['GET'])
def get_user(username):
    ...
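The handler bodies are elided in the listing above. As a rough sketch of what they might contain, assuming the PK/SK key schema defined in the CDK stack below and reusing the app and dynamodb_table objects from the listing, the handlers could read and write items along these lines (an illustrative sketch, not the exact sample code):

@app.route('/users', methods=['POST'])
def create_user():
    # Parse the JSON request body and store it as a DynamoDB item,
    # keyed by the username (PK/SK schema assumed from the stack below).
    request = app.current_request.json_body
    item = {
        'PK': 'User#%s' % request['username'],
        'SK': 'Profile#%s' % request['username'],
    }
    item.update(request)
    dynamodb_table.put_item(Item=item)
    return {}


@app.route('/users/{username}', methods=['GET'])
def get_user(username):
    # Look up the item for this username and strip the key attributes
    # before returning it to the caller.
    key = {
        'PK': 'User#%s' % username,
        'SK': 'Profile#%s' % username,
    }
    item = dynamodb_table.get_item(Key=key)['Item']
    del item['PK']
    del item['SK']
    return item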
The name of the DynamoDB table is passed in through the environment variable APP_TABLE_NAME, and the application creates a dynamodb.Table resource with that name. The infrastructure/stacks/chaliceapp.py file has the following contents:
import os

from aws_cdk import (
    core as cdk,
    aws_dynamodb as dynamodb
)
from chalice.cdk import Chalice


RUNTIME_SOURCE_DIR = os.path.join(
    os.path.dirname(os.path.dirname(__file__)), os.pardir, 'runtime')


class ChaliceApp(cdk.Stack):

    def __init__(self, scope: cdk.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        self.dynamodb_table = self._create_ddb_table()
        self.chalice = Chalice(
            self, 'ChaliceApp', source_dir=RUNTIME_SOURCE_DIR,
            stage_config={
                'environment_variables': {
                    'APP_TABLE_NAME': self.dynamodb_table.table_name
                }
            }
        )
        self.dynamodb_table.grant_read_write_data(
            self.chalice.get_role('DefaultRole')
        )

    def _create_ddb_table(self):
        dynamodb_table = dynamodb.Table(
            self, 'AppTable',
            partition_key=dynamodb.Attribute(
                name='PK', type=dynamodb.AttributeType.STRING),
            sort_key=dynamodb.Attribute(
                name='SK', type=dynamodb.AttributeType.STRING
            ),
            removal_policy=cdk.RemovalPolicy.DESTROY)
        cdk.CfnOutput(self, 'AppTableName',
                      value=dynamodb_table.table_name)
        return dynamodb_table
The CDK stack uses the Chalice construct from the chalice.cdk package. This means you can pass CDK resources into the Chalice application, and the Chalice application’s resources can in turn be referenced from the CDK API.
That is how to deploy AWS Chalice using the AWS Cloud.
Conclusion
This is what you’ve learned in this article:
- Developers nowadays prefer serverless applications over traditional server-based applications, largely because serverless applications are easier to develop and less risky to deploy.
- Serverless applications are relatively new to the market, so developing them is still an unfamiliar concept to some developers.
- AWS Chalice makes it easy for developers to build serverless applications using Python Lambda functions.
- Chalice also allows you to deploy serverless applications easily via a command-line interface.
- After developing your Chalice application, you can deploy it using the AWS Cloud Development Kit (CDK) via the cdk deploy command.
However, it’s easy to become lost in a blend of data from multiple sources. Imagine trying to make heads or tails of such data. This is where Hevo comes in.
Hevo Data with its strong integration with AWS and 100+ Sources allows you to not only export data from multiple sources & load data to the destinations, but also transform & enrich your data, & make it analysis-ready so that you can focus only on your key business needs and perform insightful analysis.
Give Hevo Data a try and sign up for a 14-day free trial today. Hevo offers plans & pricing for different use cases and business needs, check them out!
Share your experience of deploying AWS Chalice using AWS Cloud in the comments section below.