Three Essential Machine Image Creation Rules To Utilize

June 1, 2020 by Matthew Shiroma

Best practices for more consistent deployments leveraging machine image creation.

Introduction

A successful technology company is more than the product it produces. It is about how the tech is developed: the processes behind its creation. As Gene Kim puts it, “Improving daily work is even more important than daily work.” A flawed workflow can have severe repercussions, such as long, unnecessary iterations and stressful releases. Maintaining a process cycle that adapts over time and cuts needless work is crucial in today’s world. The savings that come with a well-made process are huge: more time to focus on innovation and customer relations. In other words, the process behind a product needs to be treated as a top priority before any work is done on the product itself.

While numerous design choices go into building this process, one in particular should always be considered, especially when managing infrastructure: Infrastructure as Code (IaC). This idea is more than merely writing hardcoded values into templates; it allows for fine granularity, consistency, and reliability. IaC motivates engineers to write code that can not only be deployed much faster but also lowers the risk of failures at release time, since the engineer has much greater control over the product. When applied to virtual machine image creation in particular, this opens new doors for engineers, allowing more time to be spent refining the product. However, as with all software practices, there are three essential machine image creation rules to utilize:

  1. Separating infrastructure from image creation
  2. Creating flexible, but stable images
  3. Optimizing image build speeds

As such, this blog post will strive to provide some insight into these three points.

All code shown is pseudo-code; it is meant to serve as an example.

Separating Infrastructure from Image Creation

The first rule when building images is deciding where each piece of configuration belongs in a deployment. To give an example, imagine defining an Amazon Machine Image (AMI) for an Elastic Compute Cloud (EC2) instance. The main goal is predefining specific software and environment variables in the image. One IaC solution for image creation is Packer, which will be used throughout this blog.

# A snippet of a Packer template
$ cat example_packer_image.json
{
    "variables": {
        "ENV_NAME": null,
        "RHEL_SOURCE_AMI": null
    },
    "builders": [
        {
            "name": "ec2-builder",
            "type": "amazon-ebs",
            "region": "us-west-2",
            "source_ami": "{{user `RHEL_SOURCE_AMI`}}",
            "instance_type": "t3a.medium",
            "tags": {
                "RHEL_SOURCE_AMI": "{{user `RHEL_SOURCE_AMI`}}",
                "ENV": "{{user `ENV_NAME`}}"
            }
        }
    ],
    ...
}

Once the image has all the necessary packages and variables declared, the next step is defining the infrastructure that will leverage this image. This raises the central question: which configuration should be declared in the image, and which in the infrastructure?

Config Option 1: CLI Tools/Package downloads

When it comes to installing tools and their respective packages (e.g., Python, Docker, the AWS CLI), these should always be prebaked into an image, since they rarely change over the course of spinning up an environment. To make sure these tools are consistent with each install, pinning them to a specific version is a must. Doing so guarantees that the initial image configuration will produce an idempotent result.

# We create an example_variables.json whose values will be interpolated into example_packer_image.json

$ cat example_variables.json
{
    "RHEL_SOURCE_AMI": "ami-0c2dfd42fa1fbb52c",
    "ENV_NAME": "sandbox",
    "SOME_SOFTWARE_VERSION": "1.0.0"
}

$ cat example_packer_image.json
{
...
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "{{.Vars}} sudo -E -S bash '{{ .Path }}'",
            "inline": [
                "yum install -y some_software-{{user `SOME_SOFTWARE_VERSION`}}"
            ],
            "environment_vars": [
                "ENV_NAME={{user `ENV_NAME`}}",
                "SOME_SOFTWARE_VERSION={{user `SOME_SOFTWARE_VERSION`}}"
            ]
        }
    ]
}

Config Option 2: Environment Specific Details

If there are specific components in the cloud environment (networking, logging, etc.) hosting the infrastructure, deciding on the right location for that configuration can cause some trouble. If the infrastructure being deployed will be used in different environments, that configuration should be defined separately from the image.

# We should NOT put environment-specific configuration in the image, since this would force us to build a new image
# for each environment. Instead, that configuration should live elsewhere in our development process.

$ cat bad_example_variables.json
{
    "RHEL_SOURCE_AMI": "ami-0c2dfd42fa1fbb52c",
    "ENV_NAME": "sandbox",
    "SOME_SOFTWARE_VERSION": "1.0.0",
    "ENVIRONMENT_URL": "https://something"
}

$ cat bad_example_packer_image.json
{
...
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "{{.Vars}} sudo -E -S bash '{{ .Path }}'",
            "inline": [
                "yum install -y some_software-{{user `SOME_SOFTWARE_VERSION`}}",
                "echo \"*.*  @@${ENVIRONMENT_URL}:514\" >> /etc/rsyslog.conf"
            ],
            "environment_vars": [
                "ENV_NAME={{user `ENV_NAME`}}",
                "SOME_SOFTWARE_VERSION={{user `SOME_SOFTWARE_VERSION`}}",
                "ENVIRONMENT_URL={{user `ENVIRONMENT_URL`}}"
            ]
        }
    ]
}

In the example above, alongside installing a package, we also configure remote logging via rsyslog. However, because we bake a specific endpoint into the image, if we use this image in an environment that does not have connectivity to that endpoint, the image will need to be rebuilt with the proper endpoint.

While it can seem optimal to cache environment components in the image, doing so locks the image to those particular environments. All environments using that built image need to have the same values reflected in their infrastructure; otherwise, either the image or the environment will need to be modified. This causes unnecessary headaches when new environments are created that do not necessarily match what the last image had prebaked into it. In other words, it is important to understand which components belong in an image and which do not.
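One common way to keep that configuration out of the image is to inject it at launch time, for example through EC2 user data. The sketch below is a minimal, hypothetical version of that idea: the logging endpoint (here a made-up hostname) is supplied by the infrastructure layer and written into the rsyslog configuration at boot. The function is demonstrated against a temporary file; on a real host it would target /etc/rsyslog.conf and be followed by an rsyslog restart.

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical launch-time (user data) step: the environment-specific
# logging endpoint is passed in by the deployment, not baked into the AMI.
configure_remote_logging() {
    local endpoint="$1" conf="$2"
    # "@@host:port" forwards all syslog messages over TCP in rsyslog syntax.
    echo "*.*  @@${endpoint}:514" >> "${conf}"
}

# On a real instance this would append to /etc/rsyslog.conf and then
# restart rsyslog; here we write to a temporary file for illustration.
conf_file="$(mktemp)"
configure_remote_logging "logs.sandbox.example.com" "${conf_file}"
cat "${conf_file}"
```

Because the endpoint arrives at deploy time, the same AMI can serve every environment; only the user data differs between them.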

Creating Flexible, but Stable Images

Another aspect of image creation is writing flexible, modular code that is stable across numerous reuses, regardless of use case. Building off the previous section, it is crucial to use variables in images so that building a unique image is as simple as changing a value in the variables.json. That way, the image’s code can be reused many times without having to constantly go into the code, change a value in place, and rebuild the image. A reasonable example of this is defining a software version in the variables.json. From our earlier example showing how the CLI tools and packages are prebaked into the image:

$ cat example_variables.json
{
    "RHEL_SOURCE_AMI": "ami-0c2dfd42fa1fbb52c",
    "ENV_NAME": "sandbox",
    "SOME_SOFTWARE_VERSION": "1.0.0"
}

$ cat example_packer_image.json
{
...
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "{{.Vars}} sudo -E -S bash '{{ .Path }}'",
            "inline": [
                "yum install -y some_software-{{user `SOME_SOFTWARE_VERSION`}}"
            ],
            "environment_vars": [
                "ENV_NAME={{user `ENV_NAME`}}",
                "SOME_SOFTWARE_VERSION={{user `SOME_SOFTWARE_VERSION`}}"
            ]
        }
    ]
}

This allows for a streamlined process to upgrade the software to whatever version we pass in. However, not all cases for using variables are as straightforward. Consider this: suppose the image has the ability to set an environment-specific value when we build it. That might seem fine, since we are making it flexible for other use cases. Does that fall under the flexible image rule? (If you’ve been paying attention, the answer is a no-brainer.)

Those kinds of variables should never be in the image. This leads back to the issue mentioned in the first point about separating certain configuration from image creation. How does one make an image that can leverage such a variable if we cannot define it in the image? Recall that images should be flexible enough to produce the same end result regardless of what values we interpolate in. This behavior also implies that images are stable enough to serve as the basis for additional configuration added later on. In other words, images should be written with the mindset that they will contain only the bare necessities for each deployment. Looking back at the bad code example:

$ cat bad_example_variables.json
{
    "RHEL_SOURCE_AMI": "ami-0c2dfd42fa1fbb52c",
    "ENV_NAME": "sandbox",
    "SOME_SOFTWARE_VERSION": "1.0.0",
    "ENVIRONMENT_URL": "https://something"
}

$ cat bad_example_packer_image.json
{
...
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "{{.Vars}} sudo -E -S bash '{{ .Path }}'",
            "inline": [
                "yum install -y some_software-{{user `SOME_SOFTWARE_VERSION`}}",
                "echo \"*.*  @@${ENVIRONMENT_URL}:514\" >> /etc/rsyslog.conf"
            ],
            "environment_vars": [
                "ENV_NAME={{user `ENV_NAME`}}",
                "SOME_SOFTWARE_VERSION={{user `SOME_SOFTWARE_VERSION`}}",
                "ENVIRONMENT_URL={{user `ENVIRONMENT_URL`}}"
            ]
        }
    ]
}

To make this image more flexible, the ENVIRONMENT_URL variable can be removed from both files, which is sufficient for this image to serve as a good basis. That value can be added to the configuration later in the deployment. That way, the image can be used as many times as needed, only being rebuilt when we need a new software version. With that change, the first image built from this config can be used for essentially the rest of a development cycle. As such, the most optimal way to write our image is:

$ cat example_variables.json
{
    "RHEL_SOURCE_AMI": "ami-0c2dfd42fa1fbb52c",
    "ENV_NAME": "sandbox",
    "SOME_SOFTWARE_VERSION": "1.0.0"
}

$ cat example_packer_image.json
{
...
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "{{.Vars}} sudo -E -S bash '{{ .Path }}'",
            "inline": [
                "yum install -y some_software-{{user `SOME_SOFTWARE_VERSION`}}"
            ],
            "environment_vars": [
                "ENV_NAME={{user `ENV_NAME`}}",
                "SOME_SOFTWARE_VERSION={{user `SOME_SOFTWARE_VERSION`}}"
            ]
        }
    ]
}

# If we were to build this packer image, this is what it would look like:
$ packer build -var-file=example_variables.json example_packer_image.json

==> amazon-ebs: amazon-ebs output will be in this color.

==> amazon-ebs: Creating temporary keypair for this instance...
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Waiting for instance to become ready...
==> amazon-ebs: Connecting to the instance via SSH...
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI...
==> amazon-ebs: AMI: ami-XXXXXXX
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Build finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:

us-west-2: ami-XXXXXX

Optimizing Image Build Speeds

The last aspect of image building is the speed of the build process itself. Building images is slow, especially when there is no way to cache steps from earlier builds. As such, the success of leveraging image creation can come down to how fast an image can be built. Again, recall the previous point: image creation works best when an image only needs to be built once unless a specific change occurs. That point is crucial to master; any deployment utilizing an image-building process will be severely hindered by the image’s build time if the image was poorly written to accommodate modularity.

To put it into perspective, suppose the image’s code did not follow the first two rules: configuration separation and modular code. When the image gets built, it takes about 5 minutes to complete. Not bad. That image is then placed into a deployment, which takes about 5 more minutes to complete. However, an issue comes up in the deployment, the cause being a small typo in the image’s code. After a hotfix, the image is built again, which takes another 5 minutes. The deployment is restarted, which is another 5 minutes. At this point, it should be clear that every time an image needs to be fixed, rebuilding it takes time. Now consider the case where variables that do not belong in the image are baked into it: the image must be rebuilt for every deployment. Development slows to a crawl because of all the time spent waiting for images to build. Not only that, but when it comes to vetting a particular solution in a deployment, image build times will be a huge factor in whether developing that process takes a few hours or a few weeks (a bit of an exaggeration, but the point stands: image builds take a lot of time out of a day).

While there is no true way to bypass an image’s base build time, subsequent image builds can be prevented by following the first two rules mentioned earlier: separating infrastructure from image creation and creating flexible, but stable, images. Abiding by those two rules will not only save time but also free up crucial development time to look into more pressing deployment matters. Bottlenecks in deployments are frustrating, and while images as code are supposed to alleviate that time, not following these guidelines will make image creation as slow as a manual process.
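A cheap way to enforce “build once, reuse” in a pipeline is to gate the packer build on whether any pinned input actually changed. The sketch below is a hypothetical gate, not from the original post: in practice the existing version might be read from the latest AMI’s tags (such as the SOME_SOFTWARE_VERSION tag in the earlier examples), but here it is passed in directly to keep the example self-contained.

```shell
#!/bin/bash
set -euo pipefail

# Rebuild gate: only rebuild the image when a pinned input has changed.
# existing_version would normally come from the latest AMI's tags; it is
# a plain argument here so the sketch runs anywhere.
needs_rebuild() {
    local existing_version="$1" requested_version="$2"
    [ "${existing_version}" != "${requested_version}" ]
}

if needs_rebuild "1.0.0" "1.0.1"; then
    echo "inputs changed: running packer build"
    # packer build -var-file=example_variables.json example_packer_image.json
else
    echo "no input changes: reusing the existing image"
fi
```

With a gate like this in CI, a typo-free, well-separated image is built exactly once per version bump, and every deployment in between simply reuses it.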

Wrapping It All Up

Creating a process that can be used over and over and adapt over time is challenging. However, when the proper decisions are put in place, this feat becomes second nature. IaC, for instance, is crucial when it comes to writing sound deployments. By defining infrastructure in a way that can be tracked by version control, teams become more aware of what is being deployed. With image creation being one application of IaC, it is crucial to understand the basic rules of writing image creation code. By following these three essential image creation rules (separating infrastructure from image creation, creating flexible, but stable images, and optimizing image build speeds), your team and processes will improve tremendously.

Keep on coding and stay safe!
