Locking down AWS Serverless applications, the right way

March 14, 2019

Since the debut of AWS Lambda in 2014, the Functions-as-a-Service (or "Serverless") platform for backend services has soared in popularity. At the forefront of this rise is the Serverless Framework, which offers exceptional development and operational ergonomics, especially when targeting AWS Lambda. However, crafting secure, locked-down cloud infrastructures around these applications remains an often overlooked challenge.

In this post, we will review the basics of IAM permissions in AWS Serverless Framework applications, tour some existing solutions, and then present the terraform-aws-serverless collection of modules. We will explore how to integrate the terraform-aws-serverless project to provide specialized roles for your development team, isolate and lock-down application privileges, and support many commonly-encountered development scenarios.

The problem

Getting a Serverless Framework application up and running in AWS is straightforward and well documented. Essentially, you write your code, add a few fields to serverless.yml, and you're up in the cloud with just a few CLI commands!

But, beneath this happy, seemingly "zero configuration" world lies potential pitfalls and hidden complexity. The question we really need to ask is this:

Is my Serverless Framework deployment secure?

Under the hood, the Serverless Framework creates many AWS resources beyond just the expected Lambdas and API Gateways. A large number of different permissions are required to deploy these resources. Consequently, many developers will feel pressure to take a shortcut and deploy Serverless Framework applications with an AWS superuser -- a potentially dangerous practice. If those credentials are compromised, every AWS resource in the same account could be at risk, ranging from sensitive customer datastores to production web applications. Scary!

Existing privilege approaches

There are a number of ways to lock down Serverless Framework applications on AWS. Let's review a few of the most common:

Single account, AWS superuser:

This is the easiest and riskiest option: a single account in which one superuser that controls all AWS resources (production databases, apps, etc.) is also used to deploy Serverless Framework applications. If something goes wrong, everything everywhere is at risk.

Multiple accounts, AWS superusers:

For a Serverless Framework application this usually means a different account for each stage (e.g., development, staging, production) of each separate application project (e.g., everything controlled within one serverless.yml file). If a given AWS superuser is compromised, the damage is limited to just the Serverless project at that specific stage. Much, much better security!

But, this creates an infrastructure hassle when applications need to share resources or communicate across a project or stage boundary. Just maintaining multiple accounts and superusers can be an operational burden. Accordingly, we'll sometimes see tradeoffs of security vs. maintainability with intermediate schemes such as single accounts handling all applications and resources for a single stage, so that, e.g., a compromise of development doesn't threaten the production environment.

Single account, limited AWS IAM users:

This approach entails setting up IAM policies tailored to exactly what an AWS IAM user needs for Serverless Framework development and nothing more. Ideally, in this scenario IAM policies isolate resource references distinctly for each of the following factors:

  • Service: A single Serverless Framework project (e.g., all the resources defined in one serverless.yml configuration file). Loosely, this corresponds to the service field in a serverless.yml file.
  • Stage: The deployment environment, corresponding to the provider.stage entry in a serverless.yml file. Usually, something like development, staging, and production.
  • Role: Some notion of different "types" of users to distinguish ones who can create and delete the entire Serverless Framework application from those who can merely update existing applications.
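To make the service/stage isolation concrete, here is a sketch of an IAM policy statement scoped to a single service and stage rather than a bare wildcard. The service name (`sls-myservice`) and action list are hypothetical, chosen only to illustrate the ARN-scoping idea:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedToServiceAndStage",
      "Effect": "Allow",
      "Action": [
        "cloudformation:DescribeStacks",
        "cloudformation:UpdateStack"
      ],
      "Resource": "arn:aws:cloudformation:us-east-1:*:stack/sls-myservice-development/*"
    }
  ]
}
```

The key point is the `Resource` field: the Serverless Framework names its CloudFormation stack `SERVICE-STAGE`, so the ARN can pin permissions to one project at one stage instead of `*`.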

By way of a little backstory, as Formidable became more and more involved with Serverless Framework applications, we scoured the community ecosystem for resources that might allow us to set up permissions along these lines. Unfortunately, we came up empty. While a few noble projects and articles attempted to provide guidance for limited Serverless Framework IAM privileges, we typically found some combination of the following shortcomings:

  1. Failing to lock down each IAM resource to the maximum extent possible. Most existing projects/articles in this area have some number of overly-permissive wildcards ("*") on AWS resources that could be locked down to at least service and stage.
  2. Not handling all Serverless Framework-specific variations (e.g., a Serverless Framework application has a potentially truncated S3 deployment bucket name).
  3. Providing only one set of all-or-nothing permissions rather than offering more nuanced roles along boundaries of creating/deleting vs. merely updating Serverless Framework resources.

So, after double-checking that none of the off-the-shelf solutions met our needs, we decided to write our own.

Introducing the terraform-aws-serverless project

Over the past year our team has conducted extensive research to distill a collection of the absolute minimum IAM privileges that can support the Serverless Framework. With this set of IAM privileges, we've been able to achieve strong application isolation without hindering development velocity for a number of Formidable's clients.

We are thrilled to bring this work to the community with the terraform-aws-serverless project, providing battle-tested and locked-down privileges for Serverless Framework applications running within a single AWS account. We have scrutinized all applicable IAM resources in a Serverless Framework deployment, and fully documented exactly how and why each resource limitation / allowance exists in the project.

How to integrate terraform-aws-serverless

The high-level integration process starts with an AWS superuser who creates supporting cloud infrastructure that can assign limited privileges to IAM users for developers and automation contexts. The IAM users can then appropriately deploy, modify, and introspect Serverless Framework applications. As the terraform-aws-serverless project is a collection of Terraform modules, you will need some familiarity with setting up Terraform stacks.

A good first stop in your integration journey is to review our sample reference application that integrates everything we will discuss today into a fully production-ready "hello world" Lambda application. In the remainder of this post we will work through an abbreviated example in the following steps:

  1. Configure your Terraform and Serverless Application projects.
  2. Provision the Terraform stack with an AWS superuser.
  3. Attach IAM groups to appropriate IAM users for the Serverless Framework project.
  4. Deploy the Serverless Framework application and perform any other lifecycle commands.

Aside: Why Terraform?: Terraform is a bit of a learning/technology burden over and above the normal expectations for a Serverless Framework application in AWS. In fact, our original version of the project was written in CloudFormation. However, we decided to port our work to Terraform for a few reasons, the main one being that Terraform modules/submodules allow easy packaging of distinct features and scenarios. Additionally, we have found Terraform stacks to be a bit more maintainable and extensible in practice over the lifetime of our client projects, and there is a large, vibrant open source ecosystem of Terraform modules to help kickstart your specific infrastructure project. All that said, if a strong community desire for a CloudFormation version of the project emerges, we could investigate releasing that as well.


How to configure the projects

The purpose of the terraform-aws-serverless project is to support Serverless Framework applications, so configuration is mostly governed by the Serverless Framework. Thus, we'll start configuring our Serverless Framework project with the two relevant naming choices:

  • service: The project service field, usually something like a single word or phrase. One best practice for pairing with the Terraform module is to prefix the service name with sls- in serverless.yml. This allows an AWS administrator viewing cloud resources to easily distinguish between those created by the Serverless Framework from those created by Terraform. And, since we love weird, yet oddly cute tapirs, let's put this all together and name our Serverless Framework application sls-tapir.
  • stage: The provider.stage field in serverless.yml. This is an arbitrary, user-defined list of different deployment target names in AWS. Our example will configure a reasonably typical list of development, staging, and production. For this exercise, we'll deploy to the development environment.

Let's turn these choices into a serverless.yml configuration:

```yaml
# **`terraform-aws-serverless` integration note**:
# Should be `sls-` + `service_name` parameter *or*
# be specified directly as param `sls_service_name`.
service: sls-tapir

provider:
  name: aws
  runtime: nodejs8.10
  region: "us-east-1"
  stage: ${opt:stage, "development"}

functions:
  function1:
    # ...
  function2:
    # ...
```

Now, let's configure a Terraform stack to support this Serverless Framework project:

```hcl
# variables.tf
variable "stage" {
  description = "The stage/environment to deploy to."
  default     = "development"
}

# main.tf
provider "aws" {
  region = "us-east-1"
}

# Core `serverless` IAM support.
module "serverless" {
  source = "FormidableLabs/serverless/aws"

  region       = "us-east-1"
  service_name = "tapir"
  stage        = "${var.stage}"

  # (Default values)
  # iam_region       = `*`
  # iam_partition    = `*`
  # iam_account_id   = `AWS_CALLER account`
  # tf_service_name  = `tf-SERVICE_NAME`
  # sls_service_name = `sls-SERVICE_NAME`
}
```

Our configuration parameter service_name = tapir will produce AWS resources prefixed by default with tf-tapir and infers a Serverless Framework service field name of sls-tapir for assigning IAM privileges, which matches what we configured above in serverless.yml. (All service name-related fields can be customized, if desired.) The stage parameter corresponds directly to the provider.stage field in serverless.yml. The remaining input parameters are documented further in the project's integration notes.
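As a quick illustration of these naming conventions, the prefixes and IAM group names can be derived as follows. This is a sketch of the convention only, not the module's actual implementation; the real names come from the module's outputs:

```sh
# Sketch of the `terraform-aws-serverless` naming conventions.
SERVICE_NAME="tapir"
STAGE="development"

TF_SERVICE_NAME="tf-${SERVICE_NAME}"    # prefix for Terraform-created resources
SLS_SERVICE_NAME="sls-${SERVICE_NAME}"  # expected `service` field in serverless.yml

# IAM group names follow `tf-SERVICE-STAGE-ROLE`.
ADMIN_GROUP="${TF_SERVICE_NAME}-${STAGE}-admin"
echo "${SLS_SERVICE_NAME} / ${ADMIN_GROUP}"
```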

How to provision the Terraform stack

Now that we've ironed out our configurations, it's time to provision the Terraform support stack with an AWS superuser (or user with a lot of IAM privileges). Let's assume that an AWS user user-super has these permissions and we've configured a similarly named AWS profile, so we can use it in Terraform and Serverless via the AWS_PROFILE environment variable.
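If you haven't already set up a named profile, it is just an entry in the AWS shared credentials file (placeholder values shown):

```ini
# ~/.aws/credentials
[user-super]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...
```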

To create the Terraform support stack, we follow the usual Terraform sequence of init and apply:

```sh
$ AWS_PROFILE=user-super \
    terraform init -var stage=development

$ AWS_PROFILE=user-super \
    terraform apply -var stage=development
```

... and we have our support stack!

Attaching groups to AWS IAM users

The Terraform stack outputs three IAM groups that can be attached to IAM users in our AWS account. Notice below that each group name corresponds to the level of isolation of service/Serverless Framework project, stage, and then our customized "role":

  • Administrator (tf-tapir-development-admin): AWS users assigned to the admin group can create/update/delete a Serverless application and do pretty much anything that the Serverless Framework permits out of the box.
  • Developer (tf-tapir-development-developer): AWS users can update a Serverless application and do other things like view logs, perform rollbacks, etc.
  • Continuous Integration (CI) (tf-tapir-development-ci): Presently the same privileges as the developer group, intended for automation.

The division into these three groups is an opinionated creation of the terraform-aws-serverless project. In practice, we have found that many projects don't typically require a developer or CI to be able to create / delete Serverless Framework resources, and thus we can get an extra level of security from segregating those privileges. This further allows us to put nearly all of the elevated AWS IAM permissions that only support a wildcard ("*") into the administrator group.

On the other hand, many Serverless Framework projects do have developers or automation creating new AWS resources quite often. In such a scenario, the appropriate solution may be to route everything through the admin IAM group. This at least provides security isolated to the Serverless Framework project and stage.

Continuing with our example, let's assume we end up with the following users:

  • user-admin assigned to tf-tapir-development-admin group
  • user-developer assigned to tf-tapir-development-developer group
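Group membership can also be managed in the same Terraform stack, if you prefer infrastructure-as-code over console clicks. Here is a hedged sketch using the standard `aws_iam_group_membership` resource; the user and group names are the example values above, and this is an assumed addition to main.tf rather than part of the module itself:

```hcl
# Hypothetical addition to main.tf: attach existing IAM users to the
# groups created by the `serverless` module.
resource "aws_iam_group_membership" "admins" {
  name  = "tf-tapir-development-admins"
  group = "tf-tapir-development-admin"
  users = ["user-admin"]
}

resource "aws_iam_group_membership" "developers" {
  name  = "tf-tapir-development-developers"
  group = "tf-tapir-development-developer"
  users = ["user-developer"]
}
```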

With these users, we're now ready to deploy the Serverless Framework application!

How to create the Serverless Framework application as an admin

An IAM admin user can perform all actions offered by the serverless CLI tool. A typical starting point is creating a Serverless Framework application with the first deploy, which then creates the necessary CloudFormation resources in addition to deploying the code to Lambda:

```sh
$ AWS_PROFILE=user-admin \
    serverless deploy --stage development
```

After this first deploy, assuming no underlying AWS resources change in the Serverless Framework application, a developer/CI user can perform additional deploys.

Some other commands that are limited to the admin group include the following:

  • serverless remove: Delete the Serverless Framework application and AWS resources.
  • serverless metrics: View application cloud metrics. This action lives in the admin group because, due to AWS limitations, the underlying IAM resource must be a wildcard and can thus see information for other, unrelated stages and applications.

How to develop everything else as a developer/CI

After the Serverless Framework application is deployed by an admin, users assigned to the developer / ci IAM groups can perform most of the other Serverless CLI commands, including the same deploy command for an existing Serverless Framework application:

```sh
$ AWS_PROFILE=user-developer \
    serverless deploy --stage development
```

Some other useful commands:

  • serverless info: View deployment information including service endpoints.
  • serverless logs --function {FN_NAME}: View logs for a deployed function.
  • serverless rollback: View the available deploy states and roll a deploy back to a previous one.

Automating deployments with CI

The ci IAM group is intended for use in a CI/CD process, such that a CI-targeted AWS IAM user can deploy the Serverless Framework application at various stages. In Formidable's client work, we have set up successful CI/CD systems supporting scenarios like branch merges triggering staged deploys all the way to production automatically!

As mentioned above, if you expect that normal Serverless Framework project development will need to create or delete AWS resources (e.g., regularly adding new functions) then you may be better off using the admin IAM group for CI instead of ci. If such an occurrence is relatively rare, you can likely get away with the ci group in automation backed by a human admin performing some occasional, privileged manual commands before automation kicks in.

Customizations, submodules

The terraform-aws-serverless core IAM module should get most Serverless Framework applications off the ground. But, as applications grow, chances are you will need more functionality from the AWS ecosystem. This usually means bespoke Terraform extensions and/or integrating additional modules from the Terraform registry (both of which should be straightforward alongside the terraform-aws-serverless project).

For the Serverless Framework specifically, there are some enhancement scenarios that are so common that the terraform-aws-serverless project supports or will support them directly via Terraform submodules! We presently have one implemented submodule:

  • X-ray: Brings IAM support for the fantastic AWS X-ray performance tracing tool to your Lambdas. After the submodule is deployed, you can enable X-ray support in your serverless.yml configuration or via a Serverless plugin.
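As an example, with recent Serverless Framework versions the Lambda side of this is a small provider setting. This fragment is a sketch assuming a Framework version with native tracing support; older versions would use the serverless-plugin-tracing plugin instead:

```yaml
# serverless.yml (fragment): turn on AWS X-ray tracing for functions.
provider:
  name: aws
  tracing:
    lambda: true
```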

We have an open source roadmap for additional terraform-aws-serverless submodules (most of which we've already implemented internally):

  • VPC: Create an AWS VPC suitable for placing Serverless Framework functions in a VPC, with 2 private subnets, 2 public subnets, a NAT gateway, and hot failover support.
  • KMS: Create an AWS KMS key for service + stage-specific encryption / decryption. The Lambda execution role can decrypt at runtime.
  • SecretsManager: Create and manage secrets tied to service + stage using AWS Secrets Manager. The Lambda execution role can read them at runtime.
  • CloudWatch Dashboards / Alarms: Create alarms and metrics in CloudWatch Dashboards customized to each service + stage, with tunable thresholds for alarms.

Finally, we are interested in any additional submodule suggestions from the community for better Serverless Framework support!


The terraform-aws-serverless project has helped us bring role, stage, and service isolation to our AWS Serverless Framework applications. We hope that the project helps your team enhance the security and maintainability of your cloud infrastructure and Serverless applications as well. We encourage you to give it a try and share how well the project is working! Any comments or suggestions are most welcome to help us improve and focus our future development efforts.

_Many thanks to the following folks for their invaluable post feedback: Alex DeBrie, Ian Walker-Sperber, Kevin Stephens, Tyler Thompson, and Amy Dickson_
