So You Want to Build a Terraform Pipeline? Three Things to Consider

October 4, 2018 by Chris Ciborowski

Terraform is a powerful tool when used properly; here are three basic items to help you be successful in the creation of Terraform pipelines.

As a premier provider of DevOps consulting services, we've worked with a number of clients to develop strategies and tactical paths to facilitate the adoption of Infrastructure as Code (IaC) practices and toolchains. To accomplish this, we prefer HashiCorp Terraform as the tool to scaffold infrastructure, both in the public cloud and on premises. Terraform is a powerful tool when used properly, but with this power comes great responsibility: understanding three basic items will help you be successful in building Terraform pipelines.

Let’s dive in.

Consideration #1: Driving Terraform with Jenkins

Most shops are on a mission to create pipelines that provide extremely high levels of automation across both development and IT operations functions. Historically, Jenkins has been the tool of choice, as its functionality as a continuous integration platform was unmatched. As a result, most organizations have some familiarity with the tool. Build and release engineers, continuous pipeline engineers (CPEs), and even operations staff tinkering with containers have all tried Jenkins.

Based on our experience with both tools, we have found that Jenkins may not be the optimal way to drive terraform plan and terraform apply operations. While Jenkins is certainly capable of running the required commands and jobs, and there have been folks who have written about creating a simple pipeline, such a pipeline is substandard for the enterprise without considerable additional logic and manual gates.

For example, a Groovy pipeline that checks out the repo and then iterates through init, plan, and apply steps (as outlined in the post referenced above) is simple to create. All looks great. Upon successful completion, you'll have whatever you've built using HCL, ready for your next step(s).

But there are times when Terraform operations are not idempotent. Carelessly inserting -auto-approve into the pipeline to force automation can be a recipe for disaster.

One way to address this when using Jenkins to drive Terraform is to understand the terraform plan process. Evaluating the plan output with tests is critically important; based on the infrastructure you are creating, this is the first step of testing required. Formatting tools like terraform-landscape can assist with this process. Some management of locking on the branch during a plan step also matters: because the plan is not persisted, any changes to a common, active branch can impart inconsistencies in testing, potentially creating false positives. Plan and apply operations are dependent, and apply will fail if the compiled plan is no longer consistent, but there is still room for drift.

Our recommendation, along with HashiCorp's, is NOT to use -auto-approve on apply operations unless the environment is being scaffolded for test, dev, or other "one-shot" infrastructure.

Consideration #2: Plugin Management and Version Pinning

Because Terraform provides an abstraction over the underlying services and tools that actually create and manage infrastructure, the APIs and endpoints it depends on are in a constant state of change. Tools like Kubernetes and cloud providers like Amazon Web Services are updated regularly, with new functionality added and old capabilities deprecated.

This presents a challenge when using Terraform. With multiple actors (as in a distributed team), the normal consumption of plugins can be problematic, inserting unwanted variability into operations. By default, during terraform init, any plugins required to complete tasks are downloaded and installed. But when developing pipelines, this behavior may not be optimal.

To resolve this issue, we recommend that plugins be downloaded and stored for reuse on the system performing the Terraform operations. That way, every operation uses a common set of plugins that can be modified and updated on an as-needed basis, and consistency comes from shared, automated processes. It does, however, create another set of complexities if a CI/CD tool is driving the Terraform processes, such as pre-baking images used in a pipeline or providing shared paths with cached plugins.
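As a minimal sketch of the shared-path approach, Terraform's CLI configuration supports a plugin cache directory, so init reuses downloaded providers instead of fetching them per workspace. The path below is an assumption for illustration:

```hcl
# CLI configuration (e.g. ~/.terraformrc on the build agent),
# not part of any module. plugin_cache_dir tells terraform init
# to reuse provider plugins from this shared directory rather
# than re-downloading them on every run.
plugin_cache_dir = "/opt/terraform/plugin-cache"
```

The same setting can be supplied through the TF_PLUGIN_CACHE_DIR environment variable, which is often easier to inject into CI jobs than a config file.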

In addition to plugins, the version of Terraform itself should be pinned in the configuration file. We have found that with teams that are moving quickly, using short sprints (or no sprints, for that matter), the rate of change is astonishing. While this is a good thing, it is the primary cause of inconsistencies that break plan and apply operations on existing infrastructure. To this end, setting the version and/or utilizing constraints for Terraform itself is highly recommended.
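A minimal sketch of pinning both the Terraform version and a provider version in configuration (Terraform 0.11-era syntax; the specific versions and region are illustrative):

```hcl
terraform {
  # Fail fast if the CLI running this configuration is outside
  # the range the team has tested against.
  required_version = ">= 0.11.8, < 0.12.0"
}

# Pin the provider so every pipeline run resolves the same plugin,
# regardless of what the registry has published since.
provider "aws" {
  version = "~> 1.39"
  region  = "us-west-2"
}
```

The pessimistic constraint (~>) allows patch-level updates while blocking the major and minor bumps that most often change behavior.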

Consideration #3: Managing Drift

As I mentioned at the top of this post, Terraform is a powerful tool. Out of the box, with open source providers, it can automate a substantial part of managing infrastructure.

However, we have seen many teams attempt to chain Terraform together with other tools: using configuration management to handle items outside of Terraform, and some even provisioning infrastructure through tools that themselves provide a layer of automation. All of these can have negative side effects, where manual changes (or automatic changes, like autoscaling) drive irregularities. Once changes are made outside of Terraform, how are they managed and resolved? This is referred to as drift.

One method is to separate application deployments and updates from the Terraform code used to construct or manage infrastructure. Evaluate the codebase for any item that creates a non-idempotent situation, then remove it and refactor it into a separate process and pipeline.

Another common occurrence in distributed teams is managing infrastructure both through Terraform and through a GUI, such as the AWS Console. When this happens, the next Terraform operation will evaluate the state of what is currently deployed against the desired configuration, notifying you of the actions that will be taken to reconcile the change. Fortunately, HashiCorp has included a lifecycle configuration option to assist with managing aspects of infrastructure that would otherwise result in divergent real-world state. By leveraging these options, refresh, plan, and apply operations can pass even if drift has taken place, and can continue without destroying and recreating assets.
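As a sketch of the lifecycle option in practice, the block below tells Terraform to ignore drift on an attribute that an autoscaling process changes outside of Terraform. The resource and attribute values are illustrative, not a prescription:

```hcl
resource "aws_autoscaling_group" "app" {
  name                 = "app-asg"
  min_size             = 2
  max_size             = 10
  desired_capacity     = 2
  launch_configuration = "app-lc"
  availability_zones   = ["us-west-2a"]

  lifecycle {
    # desired_capacity changes at runtime via scaling policies;
    # ignoring it keeps plan/apply from fighting the autoscaler
    # and resetting capacity on every run.
    ignore_changes = ["desired_capacity"]
  }
}
```

Without the ignore_changes entry, every plan after a scale-out event would propose shrinking the group back to the value in code.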

And, it’s probably not a bad idea to back up the state file, too.
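On that note, a minimal sketch of keeping state in a remote backend where backups come largely for free; the bucket and key names are assumptions, and S3 bucket versioning (enabled outside this snippet) is what provides the point-in-time copies:

```hcl
terraform {
  # Remote state in S3. With versioning enabled on the bucket,
  # every state write is retained as a recoverable object version.
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-west-2"
  }
}
```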

If your team is interested in or currently evaluating Terraform, the Nebulaworks team is here to help. Chat with us!
