We have recently started a blog post series about building a mid-sized project in Go with AWS, complete with unit testing and an experimental plugin feature. In this post we will discuss the AWS Go SDK. We will also begin to dissect the intricacies of Furnace.

AWS SDK

Despite the numerous and varied examples for the AWS Go SDK, it can still be quite complex and cryptic. Let's not waste any more time and clear up this confusion.

Getting Started and Developer’s Guide

The AWS documentation is top notch. The developer's guide for the SDK is a 141-page document that contains a 'getting started' section as well as an API reference. Please check it out via the following link: AWS Go SDK DG PDF. I will touch on some basic tips and tricks that I've encountered; however, I will not cover the foundations of the SDK.

aws.String and Other Types 

Something that is immediately visible once we take a look at the API is that everything is a pointer. There are a tremendous number of discussions and debates on this topic; my stance is with Amazon. There are various reasons for the types being pointers. To list the most prominent ones:

  • Type completion
  • Compile time type safety
  • Values for AWS API calls have a valid zero value, in addition to being optional, i.e. not being provided at all
  • Other options, such as empty interfaces with maps, using zero values, or struct wrappers around every type, make life much harder rather than easier
  • The AWS API is volatile; you never know when something will become optional or required

Pointers make the decision easy: you no longer need to worry about the state of the API.

For a more detailed discussion on this topic, please check out the following issue: AWS Go GitHub #363.

In order to use primitives, AWS provides helper functions like aws.String. Because `&"asdf"` is not valid Go, you would otherwise need to create a variable and use its address in situations where a string pointer is needed, for example the name of a stack. These primitive helpers make in-lining possible; we'll see later that they are used to a great extent. Pointers, however, make life a bit difficult when constructing Input structs and make for poor aesthetics.

Below is what the code looks like when stubbing a client call:

 
    return &cloudformation.ListStackResourcesOutput{
        StackResourceSummaries: []*cloudformation.StackResourceSummary{
            {
                ResourceType:       aws.String("NoASG"),
                PhysicalResourceId: aws.String("arn::whatever"),
            },
        },
    }

As you can see, the code above doesn’t look so appealing.

Error Handling
Errors also have their own types. Handling an AWS error looks like this:

    if err != nil {
        if awsErr, ok := err.(awserr.Error); ok {
            // handle the AWS-specific error here
        }
    }

First we check whether the error is nil; then we use a type assertion to check whether it is an AWS error. In the wild, this will look something like the following:

    if err != nil {
        if awsErr, ok := err.(awserr.Error); ok {
            if awsErr.Code() != codedeploy.ErrCodeDeploymentGroupAlreadyExistsException {
                log.Println(awsErr.Code())
                return err
            }
            log.Println("DeploymentGroup already exists. Nothing to do.")
            return nil
        }
        return err
    }

If it is an AWS error, we inspect the error code more thoroughly. As seen in the code example above, I'm ignoring the DeploymentGroupAlreadyExistsException.

Examples

Luckily, the API documentation is mature. In most cases it provides an example for an API call. These examples, however, from time to time provide more confusion than clarity. Take CloudFormation, for example. When I first glanced at the description of the API, it wasn't immediately clear that TemplateBody was supposed to be the whole template, and that the rest of the fields were almost all optional settings. Since the template is not an ordinary YAML or JSON file, I was searching for something that would parse the template into the struct I wanted to use. After some digging, I realized that I didn't need a parser at all; all I had to do was read in the template, define some extra parameters and hand the entire template to TemplateBody. The parameters defined by the CloudFormation template are extracted by the ValidateTemplate API call, which returns all of them in a convenient []*cloudformation.TemplateParameter slice. This functionality is not mentioned in the documentation, nor is it visible from the provided examples; I discovered it by playing with the API and some focused experimentation.
Waiters
From other SDK implementations, we've become used to waiters. These handy methods wait for a service to become available or for certain events to take effect, for example a Stack reaching CREATE_COMPLETE. The Go waiters, however, don't allow for callbacks to be fired or for running blocks like the Ruby SDK does. For my convenience, I wrote a handy little waiter. This waiter outputs a spinner to let us know that we are not frozen in time, but actively waiting.

The waiter looks like this:

    // WaitForFunctionWithStatusOutput waits for a function to complete its action.
    func WaitForFunctionWithStatusOutput(state string, freq int, f func()) {
        var wg sync.WaitGroup
        wg.Add(1)
        done := make(chan bool)
        go func() {
            defer wg.Done()
            f()
            done <- true
        }()
        go func() {
            counter := 0
            for {
                counter = (counter + 1) % len(Spinners[config.SPINNER])
                fmt.Printf("\r[%s] Waiting for state: %s", yellow(string(Spinners[config.SPINNER][counter])), red(state))
                time.Sleep(time.Duration(freq) * time.Second)
                select {
                case <-done:
                    fmt.Println()
                    return
                default:
                }
            }
        }()

        wg.Wait()
    }

And I’m calling it with the following method:

    utils.WaitForFunctionWithStatusOutput("DELETE_COMPLETE", config.WAITFREQUENCY, func() {
        cfClient.Client.WaitUntilStackDeleteComplete(describeStackInput)
    })

This will output the following lines to the console:

[\] Waiting for state: DELETE_COMPLETE

The spinner can be configured to one of the following types:

    var Spinners = []string{
        `←↖↑↗→↘↓↙`,
        `▁▃▄▅▆▇█▇▆▅▄▃`,
        `┤┘┴└├┌┬┐`,
        `◰◳◲◱`,
        `◴◷◶◵`,
        `◐◓◑◒`,
        `⣾⣽⣻⢿⡿⣟⣯⣷`,
        `|/-\`,
    }

Handy. And with that, let’s delve into the basics of Furnace.

Directory Structure and Packages

Furnace is divided into three main packages:

  • Commands
  • Config
  • Utils 

Commands

The commands package is where the gist of Furnace lies. These files implement the commands that are exposed through the CLI; each file contains the implementation of one command. The structure is based on this library: Yitsushi's Command Library. At the time of writing this post, the following commands are available:

  • create: Creates a stack using the CloudFormation template file under ~/.config/go-furnace 
  • delete: Deletes the created Stack. Doesn’t do anything if the stack doesn’t exist 
  • push: Pushes an application to a stack 
  • status: Displays information about the stack 
  • delete-application: Deletes the CodeDeploy application and deployment group created by push

These commands represent the heart of Furnace. I would like to keep these to a minimum, but I do plan on adding more, such as update and rollout. Further details and help messages for these commands can be obtained by running ./furnace help or ./furnace help create.

    ❯ ./furnace help push
    Usage: furnace push appName [-s3]

    Push a version of the application to a stack

    Examples:
      furnace push
      furnace push appName
      furnace push appName -s3
      furnace push -s3

Config

Config contains the configuration loader and some project-wide defaults, which are as follows:

  • Events for the plugin system: pre-create, post-create, pre-delete, post-delete
  • CodeDeploy role name: CodeDeployServiceRole. This is the default used to locate the CodeDeploy IAM role when none is provided.
  • Wait frequency: controls how long the waiter should sleep between status updates. The default is 1s.
  • Spinner: the index of the spinner to be used.
  • Plugin registry: a map of functions to run for the above events.

Furthermore, config loads the CloudFormation template and checks whether some necessary settings are present in the environment, e.g. the configuration folder under ~/.config/go-furnace.

Utils

Please note: in a later version this package was deprecated and is no longer part of Furnace, as `utils` packages and generic helper functions are an anti-pattern. The code the `utils` package provided now lives beside its respective users.

The following helper functions are used throughout the project:

  • error_handler: Is a simple error handler. I'm thinking of refactoring this into a saner version. 
  • spinner: Sets up the particular spinner that will be used in the waiter function. 
  • waiter: Contains the verbose waiter introduced above under Waiters.

Configuration and Environment Variables

Furnace is a Go application, so it doesn't have the luxury of Ruby or Python, where configuration files are usually bundled with the app. But it does have a standard. Configuration usually resides in either of two locations: environment properties and/or configuration files under a fixed location (i.e. HOME/.config/app-name). Furnace employs both.

Settings such as the region, the stack name, and whether the plugin system is enabled live in environment properties (though this may change), while the CloudFormation template lives under ~/.config/go-furnace/. Lastly, Furnace assumes that the deployment IAM role already exists in the AWS account. All of these are loaded and handled by the config package described above.

Usage

A typical scenario for Furnace can be seen in the following:

  • Set up your CloudFormation template or use the one provided. The provided template sets up a highly available and self-healing environment using Auto Scaling and Load Balancing with a single application instance. Edit this template to your liking, then copy it to ~/.config/go-furnace.
  • Create the configured stack with ./furnace create.
  • Create will ask for the parameters defined in the template. If defaults are provided, simply hitting enter makes Furnace use them. Take note that the provided template sets up SSH access via a provided key. If that key is not present in CloudFormation, you won't be able to SSH into the created instance.
  • Once the stack is complete, the application is ready to be pushed. To do this, run ./furnace push. This will locate the appropriate version of the app in S3 or on GitHub and push that version to all of the instances in the Auto Scaling group.

General Practices Applied to the Project

Commands
For each command, the main entry point is the Execute function. These functions mostly call out to small, self-contained methods. Logic in the Execute functions was kept to a bare minimum (and could probably be simplified even further), mostly for testability. We will see that in a follow-up post.

Errors

Errors are handled immediately, usually through a fatal log. If any error occurs, the application halts. In future versions this might become more granular: i.e. don't immediately stop the world, but try to recover, or create a Poller/Re-Tryer which retries a call a configured number of times.

Output colors

Not that important, but still: aesthetics. Displaying data in the console in a nice way gives it some extra flair.

Makefile

This project uses a Makefile for various reasons. Later on, should the project become more complex, a Makefile makes it easy to handle different ways of packaging the application. Currently it provides a linux target, which makes Go cross-compile the project for the Linux architecture from any other architecture.
It also provides an easy way to run unit tests with make test and to install with make && make install.
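A minimal sketch of such a Makefile; the target names mirror the description above, but the exact recipes are illustrative rather than Furnace's actual Makefile:

```makefile
.PHONY: build linux test install

build:
	go build -o furnace

# Cross-compile for Linux from any host architecture.
linux:
	GOOS=linux GOARCH=amd64 go build -o furnace

test:
	go test ./...

install:
	go install
```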

Closing Words

That is all for Part 2. Join me in Part 3 where I will talk about the experimental Plugin system that Furnace employs.

Cheers for reading! Gergely.