2018. június 30.

Stack management with AWS CloudFormation

Tóth Ákos
Cloud Engineer


Almost two decades ago, web APIs were generally traditional websites that exposed a series of endpoints through path manipulation, piggybacking the database of the content management system behind the scenes. The few exceptions to this rule lived in a physical stack of machines in self-maintained or rented datacenters. In the early 2000s, the concept of cloud computing became a more central focus for both datacenter owners and software engineers - and looking back now, who wouldn't want to place their application in the cloud? It's certainly an attractive alternative to maintaining your own machines, dedicating personnel to that task, and dealing with the efficient management of the resources you require versus the resources you actually use.

As a SaaS engineer in 2018, that scenario seems almost impossible to work with compared to the comfort of getting exactly what you need in the cloud. This blog post aims to demonstrate a small subset of the things you can do with CloudFormation, the automated provisioning tool for Amazon Web Services.

The anatomy of a software (as a service)

In the simplest example of a web service, we expose a web interface that receives HTTP requests, performs real-time data operations, and returns an HTTP response. Whether or not a service aims to perform a more complicated operation, this is a starting point that is nearly universally required.


Of course, a service with such a simple architecture doesn't necessarily warrant cloud-based development. Unless our aim is to handle a large volume of traffic, our web application could just as easily be a CMS, such as Drupal, deployed on space rented from a hosting provider of our choice. But even compared to a standard hosting provider, the cloud has a large advantage: instead of paying a monthly fee for fixed bandwidth, storage space, a varying degree of configurational freedom, and very strict restrictions on the type of application that can be exposed, you can rent a machine where all of these parameters are up to your imagination.

Another consideration is the expansion of features: at some point, a single machine, a single application, PHP itself, or your hosting parameters can become a bottleneck. Let's consider an advanced anatomy that is prepared to deal with a large volume of traffic and long-running tasks.


Despite the increase in complexity, this is one of the simplest architectures for an application running in the cloud. We use load balancing to delegate request processing to one of many identical API instances for processing. Tasks that are expected to produce an output important to the end user within a limited timeframe are performed in the API itself and their output is returned as a response. A straightforward example for this would be a retrieve operation for a CRUD service; beyond basic calculations such as authenticating the requester, the retrieve operation is expected to do no more than a single read from the persistent storage, and the reception of the output is the main focus for the user when they send this request, thus it is an operation performed in the API.

A create operation in the same service, however, can be considered a long-running task with asynchronous feedback: there may be several database operations to check whether creating an object is a legal operation, there may be checks performed on the object against a JSON schema or an Elasticsearch object mapping. The user also does not expect time-critical output, but rather a piece of feedback, whether or not the operation succeeded. We may not need to perform the operation synchronously within the API; we can delegate task processing to a worker for better overall performance under a large volume of requests over time.

Asynchronous feedback in this case is sent in the form of an internal message - through a websocket or a simple write to a common database - if we provide the client ourselves (for example, GitHub provides its own client application for interfacing with its backend, and the result of the operation is displayed on its own interface), or in the form of a webhook (an HTTP callback to a specified URL) to an external client application or interface.

This is getting quite difficult to model manually, however. You'd need to operate your own load balancing service, your own queue, your own persistent storage. Even if we can avoid using custom code and have deployed applications do it for us - RabbitMQ for a queue service and MySQL for a persistent storage, for example - we can still outsource the maintenance of these services by using cloud components instead.

Modelling the anatomy in the cloud

Let's take our previous anatomy diagram and replace the planned components with AWS services.


Today, AWS provides over 70 different components that you can use to build your cloud service. These managed services cover a wide enough range of the basic building blocks of an application that you might often find you don't need custom code running on an EC2 instance at all. The above translation of the model to AWS services still includes two groups of instances that run code on EC2 - the API and the worker - but you also have the option to use AWS API Gateway as your request handler, and you could opt to use a scheduled AWS Lambda as a way to spawn worker threads. A specialized use of AWS components allows for a completely serverless architecture for your cloud application; that, however, is not covered in this blog post.

Automated provisioning

An apparent downside to the model presented above is the amount of work that needs to go into manually setting it all up from the AWS interface, making sure everything is properly configured, and then repeating the process if a development or staging version of the application is required. AWS CloudFormation is an automated provisioning service that lets us define a stack template, allowing us to go through this process repeatably with a single click. In this section, we'll examine how each component can be created through CloudFormation.

A CloudFormation template is a YAML file that describes what resources we require, what parameters the stack takes and what outputs the stack should have. As JSON is a strict subset of YAML, the template may also be defined in a JSON format. Parameters essentially define input fields that you will see when provisioning the stack from the interface; they are also values you can pass when invoking a stack provisioning through the API. These parameters can then be referenced from within resource definitions. Outputs of the stack are strings displayed in a different tab. They can also be queried separately from the stack description through the AWS API.

The basic CloudFormation template follows this structure:

  AWSTemplateFormatVersion: "2010-09-09"
  Parameters:
      ParameterName:
          Type: DataType
          AllowedValues:
          - Dropdown selection 1
          - Dropdown selection 2
          Description: This field describes what the user will see as a description text when provisioning the stack from the interface.
          Default: Value
  Resources:
      ResourceName:
          Type: AWS::Service::Resource
          DependsOn:
          - OtherResource
          Properties:
              PropertyName: PropertyValue
              SecondPropertyName: SecondPropertyValue
  Mappings:
      MapName:
          MapMainKey1:
              MapSecondaryKey1: MapValue1
              MapSecondaryKey2: MapValue2
          MapMainKey2:
              MapSecondaryKey1: MapValue3
              MapSecondaryKey2: MapValue4
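The skeleton above leaves out the Outputs section mentioned earlier. A minimal, hypothetical example (the output name and value here are illustrative, not part of the stack we build below) would look like this:

```yaml
Outputs:
    OutputName:
        Description: Text displayed next to the output value on the interface.
        Value: !Ref ResourceName   # Any string, reference or function result can be output.
```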

Private clouds

Before we set up the components we planned above, there's a few other things we should consider for our stack. Adding a VPC allows our services to exist in their own local network. It allows us to have free communication within the private cloud of the services, but to only have specific entry points into the cloud from external applications - which in turn hardens the security of our microservices which are not supposed to be exposed to the rest of the internet. Let's start by defining such a VPC for our CloudFormation stack. Each of the resources defined below are an entry in the Resources object.

  VPC:                            # This is the internal resource identifier, an arbitrary string.
      Type: AWS::EC2::VPC         # VPCs are of the type AWS::EC2::VPC. All AWS resources are in the form of AWS::<Category>::<Resource>
      Properties:
          CidrBlock: 10.0.0.0/16  # CidrBlock defines the IP range for the VPC. 10.0.0.0/16 is an example range.
          Tags:                   # Tags allow us to assign arbitrary key-value pairs to most AWS resources.
          - Key: Name
            Value: ExampleVPC

For our private network's components to be reachable from the outside, we also define an internet gateway. This serves as the entry point into the VPC. We also use a VPC Gateway Attachment to assign this internet gateway to the VPC.

  InternetGateway:
      Type: AWS::EC2::InternetGateway
      Properties:
          Tags:
          - Key: Name
            Value: ExampleGateway
  VPCGatewayAttachment:
      Type: AWS::EC2::VPCGatewayAttachment
      Properties:
          InternetGatewayId: !Ref InternetGateway     # !Ref is an AWS function that gets a reference to a different resource by internal ID.
          VpcId: !Ref VPC                             # The JSON syntax for this function is { Ref: "InternalResourceName" }
                                                      # The return value of Ref is different for each resource type. Refer to the documentation for more information.

We will return to this VPC after setting up the rest of the infrastructure.
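One caveat worth noting: an internet gateway on its own doesn't route anything. For subnets to actually send traffic through the gateway, the VPC also needs a route table with a default route, plus an association for each public subnet. A sketch of what that could look like (the resource names here are illustrative, and the subnet referenced is defined later in the post):

```yaml
RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
        VpcId: !Ref VPC
RouteDefault:
    Type: AWS::EC2::Route
    DependsOn:
    - VPCGatewayAttachment                # The gateway must be attached before a route can point at it.
    Properties:
        RouteTableId: !Ref RouteTable
        DestinationCidrBlock: 0.0.0.0/0   # Send all non-local traffic...
        GatewayId: !Ref InternetGateway   # ...through the internet gateway.
SubnetRouteTableAssociationAPIZoneA:      # One such association is needed per subnet.
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
        RouteTableId: !Ref RouteTable
        SubnetId: !Ref SubnetAPIZoneA
```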

Assembling the service


CloudFormation doesn't impose any restrictions on the order of the components in the resources section, so we'll simply assemble what we see on the diagram in dependency order. First, let's consider our SQS queue: as the queue is a self-contained component with no awareness of what will put data into it or take data out, it is something other resources will depend on, rather than something that depends on other resources.

  SQSQueue:
      Type: AWS::SQS::Queue
      Properties:
          QueueName: !Join ["-", [!Ref "AWS::StackName", "task-queue"]]

The Join function lets you join strings with an arbitrary delimiter. The JSON syntax for the function is

{ "Fn::Join": ["delimiter", ["part1", "part2", ...]]}

AWS::StackName is a special referenceable constant (a so-called pseudo parameter). It returns the user-provided stack name when provisioning from the interface. When specifying non-tag names for components, always build the name from a reference to the stack name, so that multiple stacks created from the same template don't conflict.
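The same pattern can also be written with the Sub function, which substitutes ${...} references inside a template string; assuming the queue definition above, the two forms below should be equivalent:

```yaml
# Two equivalent ways to build the queue name from the stack name:
QueueName: !Join ["-", [!Ref "AWS::StackName", "task-queue"]]
# QueueName: !Sub "${AWS::StackName}-task-queue"
```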


The remaining components are autoscaling groups, a load balancer and an RDS database.

The load balancer depends on an autoscaling group as a target, so we'll postpone it until everything else is ready. The autoscaling groups and the RDS database both consist of servers, and servers in a VPC must be attached to a subnet. Subnets assign internal IP ranges and availability zones to instances or groups of instances.

It is recommended to operate with at least 3 availability zones (AZs) to ensure that your service stays available even during an outage in one of the zones - although some regions only have a total of 2 AZs available. Availability zones follow the pattern [region][zone identifier], where the zone identifier is a single lowercase letter. In most - but not all - regions, the zone identifiers a and b should be available. Examples of availability zones are us-east-1a and eu-west-1b. The AWS EC2 documentation always contains an up-to-date list of availability zones.

The example stack that is assembled by this post has been tested with the us-east-1 region, but should work with most of the regions as long as a valid AMI is specified for that region for the launch configurations below. 

  SubnetAPIZoneA:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "a"]]  # Like AWS::StackName, AWS::Region is a referenceable constant. This will construct the string
                                                                   # "us-east-1a" in the region us-east-1.
          CidrBlock: 10.0.1.0/24                                   # The IP range to assign to the subnet. 10.0.1.0/24 is an example value.
                                                                   # The IP range must be a subset of the range defined for the VPC.
          VpcId: !Ref VPC                                          # Assigns this subnet as a subnet within the VPC defined above.
  SubnetAPIZoneB:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "b"]]  # us-east-1b
          CidrBlock: 10.0.2.0/24                                   # Example value.
          VpcId: !Ref VPC
  SubnetWorkerZoneA:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "a"]]  # us-east-1a
          CidrBlock: 10.0.3.0/24                                   # Example value.
          VpcId: !Ref VPC
  SubnetWorkerZoneB:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "b"]]  # us-east-1b
          CidrBlock: 10.0.4.0/24                                   # Example value.
          VpcId: !Ref VPC
  SubnetDatabaseZoneA:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "a"]]  # us-east-1a
          CidrBlock: 10.0.5.0/24                                   # Example value.
          VpcId: !Ref VPC
  SubnetDatabaseZoneB:
      Type: AWS::EC2::Subnet
      Properties:
          AvailabilityZone: !Join ["", [!Ref "AWS::Region", "b"]]  # us-east-1b
          CidrBlock: 10.0.6.0/24                                   # Example value.
          VpcId: !Ref VPC

For the autoscaling groups, we are done; however, for the RDS database, the subnets must be assigned to a special group resource.

  SubnetGroupDatabase:
      Type: AWS::RDS::DBSubnetGroup
      Properties:
          DBSubnetGroupDescription: MySQL Database subnet grouping.  # Arbitrary
          SubnetIds:
          - !Ref SubnetDatabaseZoneA
          - !Ref SubnetDatabaseZoneB

RDS security groups

Servers that run listening services also depend on security groups. Security groups specify the IP ranges from which an instance accepts connections and which ports are open on the incoming side. They also control the range of addresses the instance can access and the ports through which it can generate outgoing traffic. For now, we'll only set up the security group for the database.

  SecurityGroupDatabase:
      Type: AWS::EC2::SecurityGroup
      Properties:
          GroupDescription: !Join ["-", [!Ref "AWS::StackName", "database"]]  # Despite being named description, this is more of an ID for the group.
          SecurityGroupEgress:                                                # Outgoing traffic rules.
          - CidrIp: 0.0.0.0/0                                                 # Allow traffic anywhere.
            FromPort: "-1"
            ToPort: "-1"                                                      # Through any port.
            IpProtocol: "-1"                                                  # On any IP protocol.
          SecurityGroupIngress:                                               # Incoming traffic rules.
          - CidrIp: !GetAtt ["VPC", "CidrBlock"]                              # Retrieves the CidrBlock attribute of the VPC resource. Not all attributes are exposed
                                                                              # to GetAtt. JSON syntax is { "Fn::GetAtt": ["Resource", "Attribute"] }.
                                                                              # This essentially means that traffic is allowed from the IP range of the VPC, and
                                                                              # transitively, from all servers within the VPC.
            FromPort: "3306"
            ToPort: "3306"                                                    # Specifies the port range 3306-3306 (equivalent to port 3306).
            IpProtocol: "tcp"                                                 # Altogether, this rule specifies that we allow TCP traffic on port 3306 from the VPC.
          VpcId: !Ref VPC

This security group allows all outgoing traffic from the database instance, but only allows incoming traffic from computers within the VPC.

RDS database

We also require a password for our database. This should be a secure parameter of the stack. The following snippet goes under the Parameters section of the stack YAML.

  DBPassword:
      Type: String
      NoEcho: true                     # Specifies that this string should never be printed on the interface or through the API.
      AllowedPattern: ^[a-zA-Z0-9_]*$  # Provided strings must match this regex.
      MinLength: 8                     # RDS passwords must be at least 8 characters long.
      MaxLength: 32
      Description: RDS root password.

Now we have everything required to set up the database, which we will provision through RDS. RDS is an AWS service that provides your choice of several types of relational databases (MySQL, PostgreSQL, etc.) and operates it as a managed service.

  Database:
      Type: AWS::RDS::DBInstance
      Properties:
          AllocatedStorage: "10"                                   # The allocated storage space for the data in gigabytes.
          DBInstanceClass: db.m4.large                             # The class to use for this instance. Instance classes come in several sizes with different
                                                                   # optimization focuses - for example, the burst-capable t2 instances or the memory-optimized r3s.
                                                                   # In our case, we use a fourth generation standard instance.
          DBSubnetGroupName: !Ref SubnetGroupDatabase              # Reference to the subnet group we created above.
          Engine: MySQL                                            # The database engine to use. Options include aurora, mariadb, mysql and postgres.
          EngineVersion: "5.7.17"                                  # The version of the database engine - refer to the documentation for the available versions.
          MasterUserPassword: !Ref DBPassword                      # Use the password parameter that we set up above.
          MasterUsername: admin                                    # Username for the root account.
          MultiAZ: true                                            # Since we're using a subnet group with multiple AZs, we set this to true.
          VPCSecurityGroups:                                       # Attaches security groups to this RDS instance.
          - !Ref SecurityGroupDatabase

EC2 security groups

Let's also set up our autoscaling groups for the API and the worker instances. An autoscaling group automatically spawns the desired number of identical EC2 instances when created. For each autoscaling group, we need a launch configuration that describes what these identical instances should look like. Load balancers are also referenced from the autoscaling group, so they must be created beforehand. Finally, each instance configured by a launch configuration, as well as each load balancer (being a listening service), can have a security group attached.

From the top down, we should start by defining the security groups for the API instances, the worker instances, and the API load balancer.

  SecurityGroupAPILoadBalancer:      # The API's load balancer should be exposed to the world as our API endpoint and should be able to forward traffic to the VPC.
      Type: AWS::EC2::SecurityGroup
      Properties:
          GroupDescription: !Join ["-", [!Ref "AWS::StackName", "api-elb"]]
          SecurityGroupEgress:       # As load balancers run no custom code, we don't care where they forward traffic and allow it to go anywhere.
          - CidrIp: 0.0.0.0/0
            FromPort: "-1"
            IpProtocol: "-1"
            ToPort: "-1"
          SecurityGroupIngress:      # We allow HTTP traffic from any IP address.
          - CidrIp: 0.0.0.0/0
            FromPort: "80"
            IpProtocol: tcp
            ToPort: "80"
          VpcId: !Ref VPC
  SecurityGroupAPI:                  # The API instances should be able to listen to HTTP requests from the ELB and SSH requests from developers.
      Type: AWS::EC2::SecurityGroup
      Properties:
          GroupDescription: !Join ["-", [!Ref "AWS::StackName", "api"]]
          SecurityGroupEgress:       # We could theoretically restrict outgoing traffic to within the ELB, but this is entirely up to what our app requires.
          - CidrIp: 0.0.0.0/0
            FromPort: "-1"
            IpProtocol: "-1"
            ToPort: "-1"
          SecurityGroupIngress:
          - CidrIp: 0.0.0.0/0        # This first rule specifies that we accept SSH requests (TCP port 22) from any IP address.
            FromPort: "22"
            IpProtocol: tcp
            ToPort: "22"
          - FromPort: "3000"         # This second rule specifies that we accept requests on port 3000 from the load balancer's security group.
            ToPort: "3000"           # We do this on port 3000 because the ELB has the ability to change the port of an incoming request.
            IpProtocol: tcp          # Later, we'll see that the ELB picks up requests on port 80 and forwards them on port 3000.
            SourceSecurityGroupId: !Ref SecurityGroupAPILoadBalancer
          VpcId: !Ref VPC
  SecurityGroupWorker:               # The worker instances have no exposed API, so we only care about the ability to SSH in.
      Type: AWS::EC2::SecurityGroup
      Properties:
          GroupDescription: !Join ["-", [!Ref "AWS::StackName", "worker"]]
          SecurityGroupEgress:
          - CidrIp: 0.0.0.0/0
            FromPort: "-1"
            IpProtocol: "-1"
            ToPort: "-1"
          SecurityGroupIngress:
          - CidrIp: 0.0.0.0/0        # We accept SSH requests (TCP port 22) from any IP address.
            FromPort: "22"
            IpProtocol: tcp
            ToPort: "22"
          VpcId: !Ref VPC

Elastic load balancing

Next, let's set up the load balancer that will handle the distribution of the API's traffic among the instances in the autoscaling group.

  LoadBalancerAPI:
      Type: AWS::ElasticLoadBalancing::LoadBalancer
      Properties:
          CrossZone: true               # Specifies that the target instances are in multiple availability zones.
          HealthCheck:                  # Defines how the ELB determines if an instance is healthy and may receive requests.
              HealthyThreshold: "5"     # An unhealthy instance must pass this many health checks in a row to be considered healthy again.
              Interval: "15"            # The time (in seconds) between two health checks.
              Target: HTTP:3000/ping    # The protocol, port (and for HTTP, the path) to send a request to. For HTTP, a 200 status code indicates a healthy response.
                                        # This specific target specifies that an HTTP request is sent to port 3000 on path /ping on each instance.
              Timeout: "8"              # Time to wait for a response - timeouts are considered unhealthy.
              UnhealthyThreshold: "2"   # A healthy instance must fail this many health checks in a row to be considered unhealthy.
          Listeners:                    # Specifies a forwarding rule.
          - LoadBalancerPort: "80"      # The load balancer listens on port 80;
            Protocol: HTTP              # uses the HTTP transport protocol for forwarding;
            InstancePort: "3000"        # and sends it to port 3000 of one of the healthy instances under the ELB.
          LoadBalancerName: !Join ["-", [!Ref "AWS::StackName", "api"]]
          SecurityGroups:               # Here we assign the security group that we created above to the ELB.
          - !Ref SecurityGroupAPILoadBalancer
          Subnets:                      # And the subnets as well.
          - !Ref SubnetAPIZoneA
          - !Ref SubnetAPIZoneB

Instance templating with launch configurations

And the final piece before we get to setting up the instances' autoscaling group: we set up the launch configurations for the instance templates themselves.

  LaunchConfigurationAPI:
      Type: AWS::AutoScaling::LaunchConfiguration
      DependsOn:
      - SecurityGroupAPI                   # We set this explicit dependency to enforce security groups to be created before the launch configuration is created.
      Properties:
          AssociatePublicIpAddress: true   # When set, an instance is assigned a public IP address as well as a private one. Required for access from outside the VPC.
          BlockDeviceMappings:             # Associates storage devices with the instances.
          - DeviceName: "/dev/sda1"        # The device will be available at this path for mounting.
            Ebs:                           # Specifies that this should be an Elastic Block Storage volume.
                VolumeSize: "10"           # The storage space on the device, in gigabytes.
                VolumeType: gp2            # Specifies the block device type - gp2 is a general-purpose SSD that should suffice for most instances.
          ImageId: "ami-#########"         # Specifies the AMI (image) that will be used as the base for these instances. This is intentionally left
                                           # as a placeholder for now, as AMIs will be discussed below.
          InstanceType: "t2.micro"         # The EC2 instance type to provision. As with database instances, several sizes and purposes are available.
          KeyName: ""                      # AWS allows you to assign a keypair to the instance. It is recommended that you create a separate keypair
                                           # outside of the CloudFormation stack, download the private key, and specify that as your keyname. You must
                                           # provide a valid value to this property if you wish to access your instances through SSH.
                                           # Due to the nature of keypairs (private keys are only available for download on creation), they are not available
                                           # as CloudFormation resources.
          SecurityGroups:                  # Assigns the security groups specified here to each instance created by the configuration.
          - !Ref SecurityGroupAPI
          UserData:                        # UserData is a base64-encoded shell script that runs after the instance is created, as root.
              Fn::Base64:                  # Fn::Base64 is a function key that base64-encodes the value string.
                  !Join ["\n", [
                      "#!/bin/bash",
                      "service my-api-service start"
                  ]]
  LaunchConfigurationWorker:
      Type: AWS::AutoScaling::LaunchConfiguration
      DependsOn:
      - SecurityGroupWorker
      Properties:
          AssociatePublicIpAddress: true
          BlockDeviceMappings:
          - DeviceName: "/dev/sda1"
            Ebs:
                VolumeSize: "10"
                VolumeType: gp2
          ImageId: "ami-#########"
          InstanceType: "t2.micro"
          KeyName: ""
          SecurityGroups:
          - !Ref SecurityGroupWorker
          UserData:
              Fn::Base64:
                  !Join ["\n", [
                      "#!/bin/bash",
                      "service my-worker-service start"
                  ]]

The UserData section is commonly used to launch services already baked into the AMI. Once the instance is launched and the UserData runs, the output log is available to be viewed at /var/log/cloud-init-output.log on the EC2 instance.
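Combined with the Sub function, UserData is also a convenient place to hand stack-specific values to the baked-in services - for example, the worker needs to know the URL of the queue it should poll. A hypothetical variant of the worker's UserData (the environment file and the service name are assumptions, not part of the stack above):

```yaml
UserData:
    Fn::Base64:
        !Sub |
            #!/bin/bash
            # !Ref on an AWS::SQS::Queue returns the queue URL, so ${SQSQueue} expands to it.
            echo "TASK_QUEUE_URL=${SQSQueue}" >> /etc/environment
            service my-worker-service start
```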

Autoscaling groups

With all other pieces in place, we can finally set up the autoscaling groups. The advantages of autoscaling groups aren't restricted to scaling alone: as the ASG handles immediate replacement of instances that terminate unexpectedly, it may be worth setting up autoscaling groups around single EC2 instances as well. As CloudFormation stacks can be updated to new versions, instances within a group may also receive rolling updates, ensuring that the group is never completely out of service. This essentially allows for the hot-swapping of single-instance services, where both the old version and the new version are served by an ELB until the new version is fully ready to take over, and only then is the old version terminated.

  AutoScalingGroupAPI:
      Type: AWS::AutoScaling::AutoScalingGroup
      DependsOn:                     # Setting explicit dependencies enforces the order in which AWS creates the resources during provisioning and deletes
      - SQSQueue                     # them during deprovisioning. There are edge cases where CloudFormation fails to implicitly determine the correct order,
      - LoadBalancerAPI              # so it is recommended that you always specify explicit dependencies.
      Properties:
          DesiredCapacity: 3         # We want 3 instances as the initial number in the autoscaling group.
          LaunchConfigurationName: !Ref LaunchConfigurationAPI
          LoadBalancerNames:
          - !Ref LoadBalancerAPI
          MaxSize: 4                 # An autoscaling group may never have more instances than its maximum size, even during rolling updates. For that reason,
                                     # it is recommended that the maximum size is always at least 1 greater than the desired capacity.
          MinSize: 1                 # The autoscaling group will aim to keep at least this many instances in service at all times. Prevents dynamic scaledowns
                                     # from bottlenecking resources.
          VPCZoneIdentifier:         # Specifies the subnets used by this autoscaling group.
          - !Ref SubnetAPIZoneA
          - !Ref SubnetAPIZoneB
      UpdatePolicy:                  # Attaches a policy that describes how the group behaves during CloudFormation stack updates.
          AutoScalingRollingUpdate:  # The AutoScalingRollingUpdate policy describes that a rolling update must be performed.
              MaxBatchSize: 1        # No more than this many instances may be updated at a time.
              MinInstancesInService: 1
              PauseTime: PT1M        # The amount of time to pause between two batch updates. PT1M means 1 minute.
  AutoScalingGroupWorker:
      Type: AWS::AutoScaling::AutoScalingGroup
      DependsOn:
      - SQSQueue
      Properties:
          DesiredCapacity: 3
          LaunchConfigurationName: !Ref LaunchConfigurationWorker
          MaxSize: 4
          MinSize: 1
          VPCZoneIdentifier:
          - !Ref SubnetAPIZoneA
          - !Ref SubnetAPIZoneB
      UpdatePolicy:
          AutoScalingRollingUpdate:
              MaxBatchSize: 1
              MinInstancesInService: 1
              PauseTime: PT1M
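The groups above only maintain a static desired capacity. To make the worker group scale dynamically, we could attach a scaling policy driven by a CloudWatch alarm on the depth of the SQSQueue defined earlier in the template. This is only a sketch: the resource names ScaleUpWorker and WorkerQueueDepthAlarm, as well as the threshold and cooldown values, are illustrative.

  ScaleUpWorker:
      Type: AWS::AutoScaling::ScalingPolicy
      Properties:
          AutoScalingGroupName: !Ref AutoScalingGroupWorker
          AdjustmentType: ChangeInCapacity
          ScalingAdjustment: 1           # Add one instance each time the alarm fires.
          Cooldown: 300                  # Wait 5 minutes before another scaling activity.
  WorkerQueueDepthAlarm:
      Type: AWS::CloudWatch::Alarm
      Properties:
          Namespace: AWS/SQS
          MetricName: ApproximateNumberOfMessagesVisible
          Dimensions:
          - Name: QueueName
            Value: !GetAtt SQSQueue.QueueName
          Statistic: Sum
          Period: 60
          EvaluationPeriods: 5
          Threshold: 100                 # Scale up when the backlog exceeds 100 messages.
          ComparisonOperator: GreaterThanThreshold
          AlarmActions:
          - !Ref ScaleUpWorker           # Ref on a scaling policy yields its ARN.

Scaling down works the same way with a negative ScalingAdjustment and a LessThanThreshold alarm; the MinSize of the group guarantees that scaledowns never empty it completely.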

These are all the basic components of our stack that we can put into our template. A few resources, however, must live outside the template in this architecture: they are shared across stacks, which makes them impractical to provision through CloudFormation.

The complete source code for the CloudFormation template that we assembled up to this point is available as a gist for download. This template doesn't do a lot - it utilizes the default Ubuntu AMI for the us-east-1 region and doesn't run an application; but it does neatly showcase the relative simplicity of automatically provisioning an entire application infrastructure with the desired components.


First, let's talk about AMIs. An AMI is a prebuilt image that can be deployed on an EC2 instance. The default images provided by AWS contain pre-installed operating systems - such as Ubuntu among several Linux distributions, or Windows Server. You can create your own AMIs by launching an EC2 instance from an existing image, performing some setup operations, and then capturing the instance's state as a new AMI. Note that AMI IDs are region-specific, but AMIs can be copied across regions, receiving a new ID in each. Since AMIs include a file system snapshot, they are a convenient way to deploy our compiled binaries onto the EC2 instances specified in the launch configuration.

Default AMIs are shared across all AWS accounts. User-built AMIs are not accessible outside the account that built them unless explicitly shared.
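If another account does need to launch instances from your image, the owner can grant launch permission explicitly. As a sketch using the AWS CLI (the AMI ID and account ID are placeholders):

$ aws ec2 modify-image-attribute --image-id ami-######## --launch-permission "Add=[{UserId=123456789012}]"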

To manage AMIs, we use an application called Packer, which automates the process of creating AMIs from its own template format. To build an image using Packer, simply run the following command:

$ packer build image_name.json
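Packer can also check a template for syntax and configuration errors without building anything, which is a quick sanity check before kicking off a lengthy build:

$ packer validate image_name.json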

Here is an example of a Packer JSON template that generates an AMI:

  {
      "variables": {
          "build": "{{timestamp}}"
      },
      "builders": [
          {
              "type": "amazon-ebs",
              "region": "us-east-1",
              "source_ami": "ami-########",
              "ami_regions": ["eu-west-1"],
              "instance_type": "t2.micro",
              "ami_name": "api_{{user `build`}}",
              "ssh_username": "ubuntu",
              "tags": {
                  "generator": "packer"
              }
          }
      ],
      "provisioners": [
          {
              "type": "shell",
              "inline": [
                  "sudo apt-get update",
                  "sudo apt-get install -y syslog-ng",
                  "sudo chmod go+w /etc/syslog-ng/conf.d"
              ]
          },
          {
              "type": "file",
              "source": "{{template_dir}}/api/etc/syslog-ng/conf.d/11-my-config.conf",
              "destination": "/etc/syslog-ng/conf.d/11-my-config.conf"
          },
          {
              "type": "shell",
              "inline": [
                  "sudo chmod go-w /etc/syslog-ng/conf.d"
              ]
          }
      ]
  }

Let's break it down part-by-part.

The variables section is fairly simple. You can define values you can later refer to in different parts of the AMI template.

  "variables": {
      "build": "{{timestamp}}"                            /* We define the current timestamp as a build ID. */
  }
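Variables can also be overridden on the command line with the -var flag, which is handy for tying the AMI name to a CI build number instead of a timestamp (the value shown is illustrative):

$ packer build -var 'build=release-42' image_name.json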

The builders section specifies the build processes to run for the Packer template. Builders run in parallel, each producing its own artifact. For this exercise, we only want an AMI as our output, so we use a single builder.

  "builders": [
      {
          "type": "amazon-ebs",                           /* Builds the AMI on an EBS-backed EC2 instance. This is the most straightforward AMI builder. */
          "region": "us-east-1",                          /* Specifies the base region to put the built AMI in. */
          "source_ami": "ami-########",                   /* The base AMI ID to use for building. Usually a default image provided by AWS. Must be in */
                                                          /* the same region as the destination AMI. */
          "ami_regions": ["eu-west-1"],                   /* A list of other regions to copy the AMI to. */
          "instance_type": "t2.micro",                    /* The instance type used for building the image. The result can be deployed to any instance type. */
          "ami_name": "api_{{user `build`}}",             /* The output AMI name that we'll create. We refer to the build variable we defined above. */
          "ssh_username": "ubuntu",                       /* The username for SSHing in. For the ubuntu base image, the default user is ubuntu. */
          "tags": {                                       /* Arbitrary key-value pairs to assign to the AMIs. */
              "generator": "packer"
          }
      }
  ]

Finally, the provisioners section describes the operations to perform during the AMI generation. Provisioners are executed sequentially within the build. It is not unusual for a build to require several provisioners, since a limitation of the file provisioner is that uploads are performed as the SSH user. Unfortunately, this means that if we want to upload system configuration files, we need shell steps that make the destination writable by the SSH user for the duration of the upload.

  "provisioners": [
      {
          "type": "shell",                             /* A shell provisioner runs a shell script. Note that these commands run as the ssh user, */
          "inline": [                                  /* not as root, so use sudo where needed. This is the list of commands in the shell script. */
              "sudo apt-get update",
              "sudo apt-get install -y syslog-ng",
              "sudo chmod go+w /etc/syslog-ng/conf.d"
          ]
      },
      {
          "type": "file",                              /* The file provisioner uploads a local file to the specified path in the AMI. */
                                                       /* {{template_dir}} is the directory containing the packer template. */
          "source": "{{template_dir}}/api/etc/syslog-ng/conf.d/11-my-config.conf",
          "destination": "/etc/syslog-ng/conf.d/11-my-config.conf"
      },
      {
          "type": "shell",
          "inline": [
              "sudo chmod go-w /etc/syslog-ng/conf.d"  /* Don't forget to revoke the write permissions from upload destinations before finishing. */
          ]
      }
  ]
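An alternative to temporarily loosening permissions on the destination is to upload to a location the SSH user can already write to, such as /tmp, and then move the file into place with sudo. A sketch of the same upload in that style:

  "provisioners": [
      {
          "type": "file",
          "source": "{{template_dir}}/api/etc/syslog-ng/conf.d/11-my-config.conf",
          "destination": "/tmp/11-my-config.conf"
      },
      {
          "type": "shell",
          "inline": [
              "sudo mv /tmp/11-my-config.conf /etc/syslog-ng/conf.d/11-my-config.conf"
          ]
      }
  ]

This avoids any window in which the system directory is world-writable.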

Packer will output an AMI ID for each specified region (the base region and any region in the ami_regions list). You can either copy the AMI ID for your preferred region directly into the CloudFormation stack yaml, or you can create a mapping in the CloudFormation template's Mappings section:

  Mappings:
      AMI:
          us-east-1:
              ImageId: ami-########
          eu-west-1:
              ImageId: ami-########

You can refer to this mapping element using the FindInMap function, which takes three arguments: the map name, the top-level key, and the second-level key.

ImageId: !FindInMap [ "AMI", !Ref "AWS::Region", "ImageId" ]


The other element we cannot create through CloudFormation is a key pair. Unlike the AMI, having a key pair is optional but highly recommended, as it allows developers to access stack instances through SSH.

Keypairs can be provisioned through the AWS interface under the EC2 service, or through the AWS CLI by issuing the following command:

$ aws ec2 create-key-pair --key-name=NAME

Provisioning through the interface allows you to download the private key once, when creating the key. When provisioning through the CLI, the private key will be part of the API response that is printed. In either case, make sure you save the private key immediately as you will not get another chance to access it.
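With the CLI, you can extract and save the private key in a single step by querying the KeyMaterial field of the response; a sketch, where the key name is illustrative:

$ aws ec2 create-key-pair --key-name my-stack-key --query 'KeyMaterial' --output text > my-stack-key.pem
$ chmod 400 my-stack-key.pem

SSH will refuse to use a private key file that is readable by other users, hence the chmod.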

Once a keypair is created and assigned to an instance directly or through the launch configuration, you can access the instance as the default user (for example, ubuntu for AMIs derived from the default ubuntu AMI provided by AWS) with the respective private key.
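In the CloudFormation template, the key pair is attached by name through the launch configuration. A sketch, reusing the LaunchConfigurationAPI resource referenced by the autoscaling group earlier; the key name is a placeholder and the AMI ID can come from the mapping described above:

  LaunchConfigurationAPI:
      Type: AWS::AutoScaling::LaunchConfiguration
      Properties:
          ImageId: ami-########          # Or look it up with FindInMap, as shown earlier.
          InstanceType: t2.micro
          KeyName: my-stack-key          # Must match the name of an existing EC2 key pair.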

Deploying your template

Now that your template is complete, you can finally use it in CloudFormation for provisioning. You have two options:

Provisioning through the interface

You can use the AWS console to create a new stack in a few very simple steps.

First, navigate to the CloudFormation section of the AWS console. Click Create Stack. You will be asked to provide a template - choose the upload option and select the stack YAML file, then click Next.

On the next screen, you're asked to provide a stack name and any (or all) of the parameters that you defined for your template. In this example, the only parameter is DBPassword. Once you've filled the fields, click Next.

On this last screen, you may tag your CloudFormation stack with any number of arbitrary key-value pairs; these are purely informational. You may also assign an IAM role to the stack; if you don't, AWS uses the permission set of your currently logged in user. Setting a dedicated role may be necessary if your user does not itself have sufficient permissions to create all stack components. Finally, you may set some advanced options: you can specify an SNS topic that receives notifications about changes in your stack status, or add a stack policy that, for example, disallows updates that would destroy a resource. Click Next one more time.

Now you'll see a summary of your stack parameters and a cost estimate for your stack. Click Create to begin provisioning your stack. You will be redirected back to the overview page for CloudFormation, where you can track your stack status. The Events tab provides continuous updates about components being created in your stack. If stack creation fails, the stack enters the ROLLBACK_IN_PROGRESS state, its components are deprovisioned, and it then remains in the ROLLBACK_COMPLETE state until manually deleted. Generally, the first CREATE_FAILED event in the events overview carries an error message in the Status reason field that describes what went wrong; subsequent CREATE_FAILED events merely indicate cancellation of the stack provisioning due to the first failure.

Once your stack status is CREATE_COMPLETE, your cloud application is ready to use.

Provisioning through the CLI

You can also use the AWS CLI to initiate the provisioning in a single command. The syntax for this command is:

$ aws cloudformation create-stack --stack-name STACK_NAME --parameters ParameterKey=PARAMETER_NAME1,ParameterValue=PARAMETER_VALUE1 ParameterKey=PARAMETER_NAME2,ParameterValue=PARAMETER_VALUE2 --template-body file://stack.yaml

As an alternative to --template-body, you can use --template-url to load the stack yaml from an Amazon S3 URL.
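You can track provisioning from the CLI as well; the first command prints the current stack status, while the second blocks until creation finishes (or fails):

$ aws cloudformation describe-stacks --stack-name STACK_NAME --query 'Stacks[0].StackStatus'
$ aws cloudformation wait stack-create-complete --stack-name STACK_NAME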


CloudFormation is a powerful tool that lets you template and replicate identical infrastructures, allowing you to set up multiple copies of the same application in the cloud with minimal effort. It is one of the most essential components to learn for any engineer who plans to work on AWS-based services.
