You recently had two great ideas: stop deploying your work-in-progress changes into production, and test your service against real AWS services (instead of trying to mock DynamoDB, Kinesis & S3). But you just realized this would mean one (or both) of two things.

Either you'll have to use a separate account for tests (not a bad practice), or you'll have to deploy your resources under different names so they don't collide with production (just like the Serverless framework does with our Lambda functions).

Multiple AWS accounts

Your keychain when you have thousands of AWS Accounts, just like Netflix - Photo by Chunlea Ju / Unsplash

I'll quickly get the first idea out of the way. Having a separate AWS account for production and another for dev / continuous integration is a great idea (Netflix even built a tool to, amongst other things, do that), especially when using services such as Kinesis, DynamoDB and Lambda.

It's the safest and smartest bet for making sure your work in progress doesn't mess up your customers' data.

But I'd rather not have more than two or three accounts (production, staging, dev), and I want my tests to be both reliable and predictable. That's less likely to happen when my databases may contain random leftovers from previous tests, or when multiple branches trigger my CI tests at the same time and collide in some way, or wipe each other's data.

So, what would I do? Quite simple: dynamize my DynamoDB tables' and my Kinesis streams' names, run the tests in one of the three accounts depending on the stage (prod, staging & dev), and remember to tear them down afterwards (that last part is important: you don't want thousands of tables lying around).

For the purpose of this demo, we'll be using a single AWS account and basic vanilla JavaScript. You'll find the setup code for this post here. Let's get to it!

Dynamizing the DynamoDB table resource's name in serverless.yml

As I wrote previously, we want to dynamize our table's name based on something that changes often and has a low collision chance, like... your branch's name! If you are following a convention similar to gitflow, it will be pretty unique, and there shouldn't be many people working on the same branch at the same time, dramatically reducing collision risks.

That's why we chose to use the branch's name and pass it to the Serverless framework on deployment using the --stage option (i.e. serverless deploy --stage $YOUR_CI_ENV_VAR_FOR_BRANCH_NAME) in our CI.

First, let's make sure we have a default stage variable, by adding the stage property to our provider in the serverless.yml, like so:

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource:
        - "Fn::GetAtt": [ DemoTestTable, Arn ]

This will make sure that, in the event we don't pass the stage option to the cli, there will be a default value (dev).
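The ${opt:stage, 'dev'} syntax is Serverless' variable-with-fallback notation. In plain JavaScript terms, it behaves roughly like this (the resolveStage function and cliOptions object are my own names, purely for illustration):

```javascript
// Rough JavaScript equivalent of ${opt:stage, 'dev'}:
// use the --stage CLI option if provided, otherwise fall back to 'dev'.
function resolveStage(cliOptions) {
  return cliOptions.stage !== undefined ? cliOptions.stage : 'dev';
}

console.log(resolveStage({ stage: 'prod' })); // 'prod'
console.log(resolveStage({}));                // 'dev'
```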

Next, variabilize the table's name in the resources section of the serverless.yml file. We do this by referencing the provider's stage property:

DemoTestTable:
  Type: 'AWS::DynamoDB::Table'
  DeletionPolicy: 'Delete'
  Properties:
    AttributeDefinitions:
      - AttributeName: hashKey
        AttributeType: S
      - AttributeName: sortKey
        AttributeType: S
    KeySchema:
      - AttributeName: hashKey
        KeyType: HASH
      - AttributeName: sortKey
        KeyType: RANGE
    BillingMode: PAY_PER_REQUEST
    TableName: Demo-${self:provider.stage}-Table

There! Our table's name is dynamized! Deploy it if you want, using sls deploy --stage whateverStageYouWant, and see for yourself! That's it, done! Sit back, relax, and congratulations!

The end! - Photo by Matt Botsford / Unsplash

Just kidding. Our function still has no idea what the stage is, so it won't be able to figure out the table's name, and it will just sadly fail on the first query.

Passing the stage to your function

Let's add an environment property to our provider, referencing both the stage and the region. Why? The Serverless framework adds the provider's environment properties to every single function defined in the project. This way, every one of your functions will be able to access these properties through its environment variables.

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource:
        - "Fn::GetAtt": [ DemoTestTable, Arn ]
  environment:
    REGION: ${self:provider.region}
    STAGE: ${self:provider.stage}

Now, edit the function so it gets the right table, by reading the stage from the environment and templating the table's name:

const stage = process.env.STAGE;
const tableName = `Demo-${stage}-Table`;
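For context, here's a minimal sketch of how the templated name could feed into a DynamoDB call. The getTableName and buildPutParams helpers and the item shape are my own illustration, not from the setup code; the resulting params object is what you'd hand to the aws-sdk DocumentClient's put method:

```javascript
// Build the stage-dependent table name (same pattern as TableName
// in serverless.yml).
const getTableName = (stage) => `Demo-${stage}-Table`;

// Assemble a params object for a DynamoDB put -- shown here as a
// plain object, without requiring aws-sdk.
function buildPutParams(stage, hashKey, sortKey, attributes) {
  return {
    TableName: getTableName(stage),
    Item: { hashKey, sortKey, ...attributes },
  };
}

const params = buildPutParams('dev', 'user#42', 'profile', { name: 'Ada' });
console.log(params.TableName); // 'Demo-dev-Table'
```

In a real handler you'd call something like documentClient.put(params).promise(), with the stage taken from process.env.STAGE as shown above.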

And now your function should be able to post to and read from the right table. Want to give it a try? Deploy, run a few queries via your favorite REST client, and don't forget to remove the stack afterwards.

Bonus: Dynamize your tables' retention policy based on the stage! (yup)

One last thing. DynamoDB tables have a Deletion Policy, determining what happens on sls remove. Typically, you'll want to automatically remove your features' tables after running your tests, but you definitely won't want to automatically delete your production tables, no matter what.

Let's automate that.

First, we'll add two custom properties: dynamoDBDeletePolicies and dynamoDBDeletePolicy. The first one defines what policies are available (Retain or Delete), the second one determines which one will be used depending on the stage, like so:

custom:
  dynamoDBDeletePolicies:
    prod: Retain
    staging: Retain
    dev: Delete
    other: Delete
  dynamoDBDeletePolicy: ${self:custom.dynamoDBDeletePolicies.${self:provider.stage}, self:custom.dynamoDBDeletePolicies.other}
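The second variable again uses Serverless' fallback syntax: look up the current stage in the map, and default to the other entry when the stage isn't listed. In plain JavaScript, the resolution behaves roughly like this (resolvePolicy is a hypothetical name, just to illustrate):

```javascript
// Mirrors the custom.dynamoDBDeletePolicies map from serverless.yml.
const deletePolicies = {
  prod: 'Retain',
  staging: 'Retain',
  dev: 'Delete',
  other: 'Delete',
};

// Mimics ${self:custom.dynamoDBDeletePolicies.${self:provider.stage},
//          self:custom.dynamoDBDeletePolicies.other}
function resolvePolicy(stage) {
  return deletePolicies[stage] !== undefined
    ? deletePolicies[stage]
    : deletePolicies.other;
}

console.log(resolvePolicy('prod'));        // 'Retain'
console.log(resolvePolicy('feature-xyz')); // 'Delete'
```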

Now, update the table's deletion policy to reference the dynamoDBDeletePolicy variable:

resources:
  Resources:
    DemoTestTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: ${self:custom.dynamoDBDeletePolicy}
      Properties:
        AttributeDefinitions:
          - AttributeName: hashKey
            AttributeType: S
          - AttributeName: sortKey
            AttributeType: S
        KeySchema:
          - AttributeName: hashKey
            KeyType: HASH
          - AttributeName: sortKey
            KeyType: RANGE
        BillingMode: PAY_PER_REQUEST
        TableName: Demo-${self:provider.stage}-Table

This way, you get fine-grained control over what should happen for every "known" stage (production, staging, integration, whatever), and a default policy for the rest. In our case, prod and staging tables won't ever automatically be removed with the stack, while dev and every other stage's tables will!

Testing

Deploy it in production using sls deploy --stage prod, wait for the deployment to finish, then remove it (sls remove --stage prod). Wait for the stack deletion to complete and check your console: the table's still here! (Quick note: you might run into some resource conflicts if you redeploy the project in production.)

Deploy it in a blabla stage using sls deploy --stage blabla, then remove it (sls remove --stage blabla). Wait for the stack deletion to finish, check your console... the "Demo-blabla-Table" is gone!

Final notes

I encountered a recurring issue with indentation while writing and deploying this example, so if you run into any issue, try converting your serverless.yml's indentation to spaces. It solves a lot of things. I guess that's the YAML life.

That being said, clean up any retained table, make sure there aren't any Lambda Functions or API Gateway leftover, and that's it! That's all there is to it! Have fun dynamizing other properties and making your life simpler, one automation at a time!

BEEP BOOP! AUTOMATE. ALL. THE THINGS. BEEEP! - Photo by Rock'n Roll Monkey / Unsplash