3 Must-know Facts About Docker Container Platforms

November 12, 2017 1:58 pm

Nebulaworks has been at the epicenter of containers, including container platforms, for almost four years. Let me provide some context. Our first orchestration platform, built using multiple OSS projects, went into production long before K8s was v1.0. In fact, at the time nobody had coined "CaaS," Docker didn't have a commercial offering, and the Docker engine hadn't been decomposed into the bits that comprise it today (runC, containerd). If we were helping you with an AWS-first strategy at the time, our recommendation would have been to use Elastic Beanstalk to run your containers. You could say we've seen it all, having deployed every platform in production, both on-premises and in the public clouds.

In that time, we've found the following statements to be overwhelmingly true, regardless of the end user, where the orchestration platform will be deployed, or the company looking to deploy the tools:

  1. All of the orchestration platforms are complex engineered systems. Look under the covers of a platform like Kubernetes: you've got SDN, a distributed key/value store, layers of reverse proxies, telemetry collection, DNS services, etc. You'll need to at least understand the architecture and how it all works together!
  2. They are all self-contained. It is tough to extract the runtime metadata they create, use it to connect to external services outside the orchestration platform, and manage the consumption of those services through policy-driven approaches.
  3. No platform has all the answers. Whether you choose PaaS (deploying your apps as containers) or CaaS, they are opinionated systems, functionally extended largely by the edge cases the OSS contributors are facing. Take a gander at the open and closed issues, PRs, and discussions for any of the supporting OSS projects used in the platforms. You'll learn a lot about why certain decisions were made.

So where does that leave you? If you are just jumping into the container game, you are best served to evaluate the pre-assembled platforms against the value of building your own platform in terms of:

  • Having the appropriate skills and managing the initial build
  • Keeping up-to-date with changes and enhancements to the platform (i.e., mitigating further accumulation of technical debt)
  • Providing something to the business that an existing open source project can't, and at what cost
  • How the platforms integrate with a modern application delivery pipeline

From there, if you choose to roll your own container platform or are thinking open source first, a sound pattern is emerging. Decompose the functionality these platforms provide (connectivity, security, and runtime) and choose best-of-breed tools that can help enable multi-cloud. For example, you can combine Docker, Consul, Vault, HAProxy, and Nomad to build a very functional platform for running containers and non-container processes, with secrets management that goes far beyond simple secrets distribution, and that is open to brokering in and out of the platform (i.e., extending into existing IT infra/services).
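
As a rough illustration of that pattern (dev mode is for a lab only, and the example job name comes from the tooling's own scaffolding, so treat this as a sketch), a single node running the stack can be stood up with a few commands:

    consul agent -dev &        # service discovery and key/value store
    vault server -dev &        # secrets management (dev mode only)
    nomad agent -dev &         # scheduler for container and non-container workloads

    nomad init                 # writes an example.nomad job definition using the Docker driver
    nomad run example.nomad    # schedule the example job
    nomad status example       # confirm the allocation is running

From there, Consul service registrations and Vault-backed secrets can be wired into HAProxy and the applications themselves, which is exactly the brokering in and out of the platform described above.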

Either way, you should look to experts to help with choosing, implementing, and integrating these distributed systems. They're not simple tools with point-and-click deployments. Even the "easy buttons" require you to make choices, and out of the box they do not deliver often-desired functionality like auto-scaling. And taking these from test cases to full production-grade deployments requires additional services and capabilities that, you guessed it, are not included in the container platforms.

To learn more about our approach, check out our Container Platforms page. If you already have one of these tools implemented but are working towards more mature enterprise integrations, learn about Continuous App and Infrastructure Pipelines.


This post was written by Chris Ciborowski

All Aboard the Docker Campus Ambassador Program!

June 2, 2017 2:23 pm

We've been involved with education around Docker for some time. From early on, Nebulaworks has provided official Docker training to folks all over the globe, from organizations of all shapes and sizes. In addition, we've been giving back by hosting two Docker Meetups (one in Irvine, the other in San Diego) to help promote the tool amongst the community. And this week, Docker has taken the next logical and important step toward building long-term support: the launch of the Docker Campus Ambassador Program.

Educating youth on new technologies

This is near and dear to my heart. And we are super excited! About two years ago, we started an effort to bring interns into the fold at Nebulaworks. We went to local universities and started to talk to department heads and CS clubs. What we found was a real lack of exposure to new technologies. I don't know why we were so surprised – running with the latest and greatest isn't something that higher ed is used to doing (and neither is enterprise IT, but I digress). But there were historical parallels that I personally witnessed. Read on.

Read More


This post was written by Chris Ciborowski

Getting Started with LinuxKit on Mac OS X with xhyve

April 23, 2017 1:15 pm

One of the major announcements last week at DockerCon 2017 was LinuxKit, a framework for creating minimal Linux OS images purpose-built for containers. Docker has been using the tools that make up LinuxKit for some time, and products derived from the tooling include Docker for Mac.

Sounds cool, and the best way to learn about a tool is to dive into using it! Given the extra time I had on the plane home from Austin, I did just that, and I'd like to share an easy way to get started using LinuxKit.

To get going you’ll need a few things:

  • A 2010 or later Mac (a CPU that supports EPT)
  • OS X 10.10.3 or later
  • A Git client
  • Docker running (In my case, 17.04.0-ce-mac7 (16352))
  • GNU make
  • GNU tar
  • Homebrew

Let’s get started!

Installing xhyve

First, we'll need to install xhyve. xhyve is a hypervisor built on top of OS X's Hypervisor.framework that allows us to run virtual machines in user space. It is what Docker for Mac uses under the hood! There are a couple of ways to install it; the easiest is to use Homebrew. Fire up your favorite terminal and install:
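
For reference, here is a minimal sketch of those first steps (the repository URL reflects where the LinuxKit project lived at the time; check the project README for the current build instructions):

    brew install xhyve                              # userspace hypervisor on top of Hypervisor.framework
    git clone https://github.com/linuxkit/linuxkit.git
    cd linuxkit
    make                                            # build the LinuxKit tooling with GNU make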

Read More


This post was written by Chris Ciborowski

Stand-alone Containerd is a Win for Everyone

December 14, 2016 5:39 pm

This morning, Docker announced that they are extracting the core functionality of the Docker engine into a component called containerd. It has been open sourced as a new project and will be governed by a yet-to-be-named independent foundation. This is a huge win for everyone using containers, from folks building applications with containers to the public clouds and third parties who provide container orchestration. Let's talk about why.

For those of us who have been working with Docker for a while, we have been witness to some fairly contentious discussions. Many of these revolve around the core component of containers – the container runtime. It is the container runtime that is responsible for determining how a container image should be instantiated as a Linux process, “contained” by the various kernel mechanisms (such as namespaces and cgroups) and made available to the network stack and storage.
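
To make that layer concrete, here is a hedged sketch of driving a low-level runtime (runc) directly; the image and container names are illustrative, and this is the kind of work containerd now wraps:

    mkdir -p mycontainer/rootfs
    # unpack a root filesystem from an existing image
    docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -
    cd mycontainer
    runc spec               # generate a default OCI runtime config.json
    sudo runc run demo      # start the process inside namespaces and cgroups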

Credit: Docker, Inc.

The root of these discussions has been who should determine what is included in the runtime, and what, exactly, that should be. As of late (especially following the 1.12 release of the Docker engine), this hit a fever pitch, including strong opinions levied by proponents of Kubernetes, not to mention talk of forking Docker itself.

Read More


This post was written by Chris Ciborowski

Docker Container Basics: An Operations Guide, Part 1 of 3

August 4, 2016 10:40 am

Docker containers crash into the enterprise…

In the last three years Docker containers have become synonymous with containers and are often confused for some kind of über machine virtualization. By its own admission, Docker is trying to make the lives of developers easier with these containers, but has this translated to a simpler environment for techops? The jury is still out on that question; nevertheless, the most obvious gap for techops is generally still a knowledge gap.

In fact, as soon as the subject of Docker comes up, the marketecture starts to fly, and phrases like Google infrastructure, infrastructure as code, continuous deployment, and DevOps are soon to follow, but none of it really spells out why operations should adopt containers. For those not initiated in the philosophies of Gene Kim, Martin Fowler, and the brotherhood of Googlers, these oft-uttered phrases lead techops to ask why this is better than the virtual infrastructure they already have. If your techops team is proactive, they will try to get in front of this Docker thing and mold these containers into the überVM, or maybe just try using them as a stand-in for an on-premises "AMI".

While spending the last 2+ years training and consulting for organizations adopting Docker into their ecosystems, I have witnessed all these reactions from techops and more. So with Docker making this year's theme one of expanding their ecosystem into the "enterprise," I thought it would be useful to go back to the basics for those techops teams that want to catch the wave before it crashes over them. This three-part post will spell out why operations should use containers, how to get going with a test bed of your very own, and how operations can participate in the containerization process.

So why containers?

Read More


This post was written by Joshua Bradley

Docker Datacenter, Engine 1.12, and their impact on orchestration

July 19, 2016 11:05 pm

It’s been a bit since the Nebulaworks team returned from DockerCon 2016. We all had a blast. Reconnecting with folks we trained at DockerCon 2015, seeing our current customers who we’ve helped with enablement and training, and being introduced to a number of new folks who are taking their first steps with Docker. Without a doubt the most frequent topic of discussion amongst attendees we talked with was Docker Datacenter and Engine 1.12, specifically:

What is the impact of Docker Datacenter (DDC) built using Docker 1.12 on the orchestration and tools market?

TL;DR: it is big. It is going to single-handedly change the way teams consider deploying Docker-based apps. In this post, I'll attempt to summarize why.
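
For context, the orchestration built into Engine 1.12 looks roughly like this (the service name and image are illustrative):

    docker swarm init                              # turn this engine into a swarm manager
    docker swarm join-token worker                 # print the command workers run to join
    docker service create --name web --replicas 3 -p 80:80 nginx
    docker service ls                              # the engine schedules and reconciles replicas itself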

Read More


This post was written by Chris Ciborowski

Got Docker…What about Orchestration? Part 2

June 17, 2015 10:11 pm

Here is part 2 of Got Docker…What about Orchestration? In part one, we covered what an orchestration framework is and the major components that comprise one. In case you missed it, the post can be found here.

List of Orchestration Frameworks

Now that we have the primer out of the way, let's get into a few of the frameworks available at this time, who is developing them, a few strengths, and where we see them playing. This is not meant to be an all-inclusive list, rather just a few of the most popular options. Also, there are many, many features…

Docker Swarm

Docker Swarm (https://docs.docker.com/swarm/) is arguably one of the simplest frameworks available today. Right now, it is in beta. Functionality is being added quickly; however, some key items are still missing. Number one is software-defined networking: today, you cannot schedule workloads across Docker hosts and link them together, seamlessly, out of the box. It ships with a service registry and supports some of the most popular external registries, such as etcd, Consul, and ZooKeeper.

Where Swarm shines is its integration with the standard Docker command line tools. Once the Swarm cluster is created, launching a container on the cluster utilizes the standard Docker commands, and the same holds true for managing the container. It also supports some of the functionality of Docker Compose, allowing the turn-up of an entire application stack with one file and one command, docker-compose up, as sketched below.
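
As a rough sketch (the Swarm endpoint, services, and ports are illustrative, and the file uses the 2015-era Compose format), pointing the standard tooling at a Swarm manager looks like this:

    cat > docker-compose.yml <<'EOF'
    web:
      image: nginx
      ports:
        - "80:80"
      links:
        - redis
    redis:
      image: redis
    EOF

    export DOCKER_HOST=tcp://swarm-manager:3376   # illustrative Swarm manager endpoint
    docker-compose up -d                          # bring up the whole stack with one command
    docker ps                                     # the familiar CLI, now against the cluster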

Swarm is by far the easiest tool to set up in order to have an orchestrated framework for running containerized applications. However, due to the lack of some key tooling (distributed networking), it is not ready for large-scale deployments. That said, for smaller environments consisting of a handful of Docker hosts, Swarm may be a good choice. We typically recommend Swarm for clients with a few Docker hosts, paired with an outside service registry and proxy service.

Kubernetes

Next on our list of frameworks is Kubernetes (http://kubernetes.io). Developed by Google and loosely based on their large-scale container management tool Borg (https://research.google.com/pubs/pub43438.html), it is primarily aimed at deploying and managing containers at scale. Also in beta, new functionality is being added quickly by a very large number of contributors; in fact, it is one of the fastest growing projects on GitHub.

Kubernetes is full featured, including networking abstraction, a service registry (etcd), proxy services, and a command line tool. You could say that today, with the exception of being young, it is well on its way to solving many of the production challenges. The goals of Kubernetes are to be:

  • lean: lightweight, simple, accessible
  • portable: public, private, hybrid, multi cloud
  • extensible: modular, pluggable, hookable, composable
  • self-healing: auto-placement, auto-restart, auto-replication

With decades of running containers at scale, Google has already addressed features that have not yet been introduced in other frameworks. Services and Replication Controllers are two of these: Replication Controllers manage the lifecycle and health of your containers (deployed as pods), while Services provide naming and act as a load balancer.
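
As a minimal sketch using the v1 API of the day (names, labels, and replica counts are illustrative):

    cat > nginx-rc.yaml <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF

    kubectl create -f nginx-rc.yaml      # the replication controller keeps three pods healthy
    kubectl expose rc nginx --port=80    # the service gives them a stable name and load balancing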

Being beta, we typically do not recommend production rollouts of Kubernetes unless the company we are speaking to has significant experience with Linux and HPC. However, it does have network abstraction solved today, which is a big plus. We feel that it is a solution for companies with many Docker hosts, but it can scale down to smaller implementations. There is a steep learning curve, and a few companies are working on developing an easy-to-deploy distribution. It is also a technology being used in OpenShift (Red Hat's PaaS) as well as CoreOS Tectonic.

Triton

Triton is brought to us by the folks at Joyent (https://www.joyent.com). It too is fairly new to the scene, still in beta but about to go GA in the near term. But there are some interesting things about Triton that make it vastly different from some of the other frameworks.

Triton is the first Docker container orchestration framework to not use Docker Engine as the runtime to execute containers. Instead, it is based on SmartDataCenter, which in turn is based on SmartOS, which is based on Illumos, which in turn was based on OpenSolaris. So, Triton has a long history of running containers, which are called Zones in the Solaris world. Given that deep history, many of the issues the current crop of Linux-based frameworks are still trying to solve have already been addressed. But this was not possible until the folks at Joyent resurrected LX-branded Zones, which allow Linux executables to run on SmartOS unmodified. Once this translation layer was completed, the puzzle came together, with off-the-shelf resolutions to networking and security. Solaris had long since resolved these issues, allowing for unique IP addresses for single containers, as well as securing all processes belonging to a container from others running on the same host kernel.

But this greatness comes with a caveat: you are not running a Docker Engine and libcontainer. While the normal Docker client works with Triton, not all commands and functionality are available. Joyent is close to Docker, but keeping parity with the API could be an issue if you want the latest and greatest functionality. What it misses there, however, you gain with two other Solaris technologies which are game changing: DTrace and ZFS. Both of those are killer and worth their own blog posts in the future.

CoreOS

CoreOS (http://www.coreos.com) is an interesting set of tools that provides a complete framework solution. While they initially started out as a pure Linux play, the company has moved towards supporting far more than just Docker containers. Late last year, CoreOS decided to launch their own container format to address concerns with the security and overall direction of Docker. What is more interesting is that while they have made it quite clear they are not 100% aligned with the Docker approach, they are undoubtedly supporting Docker.

CoreOS combines a number of stand-alone services which work together to provide the orchestration functionality. These include:

  • CoreOS:  Operating System
  • etcd: Consensus and Discovery
  • Rocket/Docker:  Container Runtime
  • Fleet:  Distributed init
  • Flannel:  Networking

One of the big benefits of using a complete solution from one vendor is that they will make sure all of the components work together as one unit, and I am sure the CoreOS folks are working towards that goal. In addition, the CoreOS approach brings some interesting capabilities to bear, which are either not addressed by the other solutions or require the use of multiple tools from different developers (i.e., multiple support paths). First, with CoreOS as the fundamental building block, updates are a breeze. Using the public Update Service (or CoreUpdate for subscribers), users get a dual-root-partition update scheme, allowing easy deployment of updates as well as a safe rollback process. Cool. Second, every CoreOS host runs etcd. This provides native service discovery, which allows containers to be launched across hosts, provides automatic configuration to applications and services (like HAProxy), and enables multi-cloud deployments easily. And the final component which brings everything together is fleet. By creating a compute cluster from CoreOS hosts that resembles a single init system, containers can be scheduled across the cluster taking into account not only workloads but also affinity and constraints.
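
A hedged sketch of what that looks like in practice (the unit name, image, and ports are illustrative):

    cat > web@.service <<'EOF'
    [Unit]
    Description=Web front end
    After=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f web-%i
    ExecStart=/usr/bin/docker run --rm --name web-%i -p 80:80 nginx
    ExecStop=/usr/bin/docker stop web-%i

    [X-Fleet]
    Conflicts=web@*.service              # keep instances on different hosts
    EOF

    fleetctl start web@1.service web@2.service
    fleetctl list-units                  # see which CoreOS host each unit landed on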

Mesos and Mesosphere DCOS

Mesos (http://mesos.apache.org) and the commercially developed product Mesosphere DCOS (https://mesosphere.com) are aimed at solving a bit of a different problem in the datacenter.  While certainly an orchestration framework, the power of these tools is to allow users to see an entire datacenter as a single kernel.  Basically, one big collection of CPU, memory, disk, and network which can have workloads (I’ll get back to this in a second) scheduled on it much like a process is scheduled on your laptop.

The Mesos framework provides the ability to run not only containers, but also short-running, long-running, and other applications across a single cluster. Instead of having a dedicated cluster for Hadoop workloads and another for Docker containers, you could have a single cluster with all of the workloads running simultaneously on the same nodes. This drives up utilization and the overall efficiency of the data center. Taking this a step further, Mesos and Mesosphere DCOS expose the underlying scheduler so that you can install other services to utilize the cluster in ways which may not be considered today. Out of the box, Mesosphere DCOS supports Cassandra, HDFS, Kubernetes (what?), Kafka, and Spark. They also support two other services: Marathon, an init system for long-running processes, and Chronos, a distributed cron system for short-running processes.

Here's where Mesos and Mesosphere DCOS get interesting with Docker: Kubernetes runs on top of Mesos or DCOS! What does that mean? You can address multi-tenancy issues by running multiple K8s environments, or run an updated K8s environment next to the current production environment for testing. Cool. Oh, and if K8s needs to scale, Mesos/DCOS handles that for you, scheduling workloads across additional nodes if necessary.

And that brings me to scaling. This is something that happens automatically, handled by the data center kernel. Need more capacity? Scale out with one command. Need to scale an application? Scale out with one command (it's automatically added to the load balancer too). And if you want to test that scale, there are both a traffic simulator and a failure simulator (chaos) which allow you to test, in real time, the effects of load or loss of services.
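
As an illustrative sketch of the Marathon piece (the host, app id, and resource numbers are assumptions), launching and scaling a Docker app is a pair of REST calls:

    cat > web.json <<'EOF'
    {
      "id": "web",
      "cpus": 0.25,
      "mem": 128,
      "instances": 2,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "nginx",
          "network": "BRIDGE",
          "portMappings": [ { "containerPort": 80 } ]
        }
      }
    }
    EOF

    curl -X POST -H "Content-Type: application/json" \
         http://marathon.example.com:8080/v2/apps -d @web.json              # launch
    curl -X PUT -H "Content-Type: application/json" \
         http://marathon.example.com:8080/v2/apps/web -d '{"instances": 10}' # scale out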

Final Thoughts

I hope you enjoyed the posts and that they were valuable as a primer. As I mentioned before, they were not meant to be an exhaustive list of all the orchestration frameworks which are available, nor were they meant to provide a detailed list of every technology and component included with each. These are complex pieces of software and there is much more under the covers with each of them… and as such, what works for one organization may not be a fit for another.

I feel that we are just beginning quite an exciting period, where any company can take advantage of the tools which were developed to solve the unique challenges of distributed computing and distributed application delivery, especially at scale. This is what we do daily at Nebulaworks: helping companies deliver software better by helping them understand and implement the tools and services available to achieve that goal. If you need help with these or would like more information, please reach out, we'd love to help.


This post was written by Chris Ciborowski

Got Docker…What About Orchestration? Part 1

May 5, 2015 1:07 pm

Shipping containers

Over the last year, Docker has had a rapid rise in popularity. Developers have realized that, in order to get their job done, a tool like Docker greatly simplifies the instantiation of services used in their applications. No longer do you need to spend time doing all of the ops legwork just to develop. Cool, eh? Not so fast…

As most of us know – what works in Dev is not necessarily what works in production.  There are different requirements for each environment, namely production ops requiring far greater levels of uptime, security, and visibility into performance metrics and logging.  So, that takes us down the road of figuring out how to run containers with these requirements – at a minimum choosing a framework to solve these challenges.

At Nebulaworks, we've found that there are a few approaches. In this multi-post series I will review the various orchestration/cluster frameworks that we have recommended to our customers, as well as the considerations regarding why and when we would recommend them. Like all things, not all tools are created equal: some are complicated and more feature-rich, while others are simple and straightforward. Depending on your shop, one will likely fit better than the other. But first, let's start with a definition.

What is an Orchestration/Cluster Framework?

Before we dive into specific orchestration frameworks and the functionality they provide, it's important to understand what they are and what they include. At a high level, they offer scheduling and networking of containers across multiple compute nodes. You could (and some vendors do) call them cluster frameworks; however, I tend to stay away from that description based on the simple fact that it carries a legacy IT definition and, with some of our clients, a preconceived idea of what they do. In my opinion, these tools are actually more closely aligned with high performance computing clusters (grids), whereby all compute resources look like a single machine for scheduling. In fact, a couple of the tools that I will review go on to define themselves as a "Datacenter OS" and "Single System."

In order to schedule workloads across multiple compute nodes, these frameworks are, at a minimum, composed of the following tools and services:

Command Line Tool: Self-explanatory; the tools are executed within a shell to configure, launch, and manage container workloads or the orchestration framework itself. They utilize the framework APIs to accomplish this task.

Controller/Scheduler: A centralized service which either itself understands the underlying available resources and schedules workload instantiation, or calls on other infrastructure tools or frameworks to determine how and where to instantiate workloads.

Compute/Container Runtime:  These compute instances are where containerized workloads are launched and scheduled.  Today, the primary technology for running workloads is Docker, with some recent announcements by vendors to support Rocket.

Service Discovery/Registry: Typically a key/value datastore where critical information about the containers that have been launched is stored: their configuration, where they are running, state, and other data. It is also used as a discovery service by newly instantiated containers.

Depending on the framework, there can be other services configured as well, such as proxy services (to provide load-balanced ports mapped to container ports), name services (which can be part of the service registry) to address new container workloads by name and simplify tying services together, container health-check services, and distributed storage.
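
To make the registry piece concrete, here is an illustrative example of the kind of key/value reads and writes involved (the keys and values are assumptions; etcd v2 syntax shown):

    # a container (or the scheduler) registers where a service instance is running
    etcdctl set /services/web/10.0.0.5 '{"port": 8080, "state": "healthy"}'

    # a newly instantiated container discovers its peers
    etcdctl ls --recursive /services/web
    etcdctl get /services/web/10.0.0.5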

In my next post, I’ll cover some of the more popular frameworks, their benefits and shortcomings.


This post was written by Chris Ciborowski

What is Docker? A Simple Explanation

March 24, 2015 2:30 pm

Docker is an application build and deployment tool. It is based on the idea that you can package your code with its dependencies into a deployable unit called a container. Containers have been around for quite some time; some say they were developed by Sun Microsystems and released as part of Solaris 10 in 2005 as Zones, while others say BSD jails were the first container technology. For a visual explanation, think of the shipping containers used for intermodal shipping. You put your package (code and dependencies) in the container, ship it using a boat or train (laptop or cloud), and when it arrives at its destination it works (runs) just like it did when it was shipped.

Before an industry-standard container existed, loading and unloading cargo ships was extremely time consuming. All of this was due to the nature of the materials being shipped: they were different shapes and sizes, so they couldn't be stacked or moved in an efficient manner. Everything was custom; this is called break-bulk shipping. The industry felt the pinch of high labor costs, wasted shipping space, and no commonality between shipping companies and transportation vehicles.

Once a standard size and shape were developed, it revolutionized the shipping industry. Agreement on the container specification decreased shipping time, reduced cost, and reinvented the way companies did business.

How will Docker impact us today?
Docker creates an isolated Linux process using software fences. For now, Docker is 100% command line (CLI) (update – not anymore, there are a few GUIs including Docker’s own Enterprise Edition). Certainly, launching Linux containers from the command line is nothing innovative. Nevertheless, the containers themselves are only a small part of the story.

The other part of the puzzle is images. Images are an artifact, essentially a snapshot of the contents a container is meant to run. As a developer, when you make a change to your code a new version of the image (actually a new layer) is automatically created and assigned a hash ID. Versioning between development, test, and production is quick, seamless, and predictable. Many longstanding problems in managing software systems are solved by Docker:

  • Management of applications: two applications that rely on different versions of the same dependency (like Java) can easily coexist on the same operating system
  • Version control: images are created using a text file (Dockerfile), so every previous image, and therefore every container deployment, is retrievable and re-buildable (see the sketch after this list)
  • Distributed management: a GitHub-like repository to manage the organization of images and the deployment of application containers (called Docker Enterprise Edition, containing Docker Universal Control Plane and Docker Trusted Registry)
  • Low overhead: unlike virtual machine hypervisors, Docker is lightweight and very fast; containers are small and start almost instantly
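
As a minimal sketch of that text-file workflow (the base image, filenames, and tag are illustrative):

    cat > Dockerfile <<'EOF'
    FROM python:2.7
    COPY app.py /app/app.py
    RUN pip install flask
    CMD ["python", "/app/app.py"]
    EOF

    docker build -t myorg/myapp:1.0 .            # each instruction becomes a cached, hashed layer
    docker run -d -p 5000:5000 myorg/myapp:1.0   # run the versioned image as a container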

One of the first things I noticed was how quickly you can make a change to an application and (re)deploy. For companies that need to scale for mobile or streaming applications, Docker will do for their business what the standardized container did for the shipping industry. Seeing this first-hand at Nebulaworks, we coined the phrase "Application Logistics," as we're moving apps quickly in and out of production as containers, across on-premises and cloud infrastructure.


This post was written by Gerry Fleming

Containers – the new guy on the block…so why all the noise?

March 10, 2015 1:44 pm

By now, you have most certainly heard there is a new virtualization tool on the block. And this guy seems to have moved into that quiet, well-established neighborhood populated with virtual machines and instances and is creating quite a stir. Like a non-stop house party for the past six months.

Since we are focused on delivering apps efficiently and helping companies integrate CI/CD with these tools, we are always talking containers. Fast forward to last week, when I was having lunch with a non-engineer friend of mine who works in the tech industry. Shockingly (at least to me), he had not heard of containers. And on a weekly basis we meet with folks that "have heard of it" but do not have a good handle on the details. I figured now is better than ever to shed some light on container tech. This is meant to be a primer, accomplishing the following:

  • What are containers?
  • Various container tech available for use
  • How we see containers being used today

Let’s get started.

What are containers?

Back to that quiet street. It is populated by virtual machines (or instances for the *Stack and AWS/GCE crowd), a concept that has been around for a long while. To review, a virtual machine is a construct which abstracts hardware. By implementing a hypervisor, one takes a single computer, with CPU, memory, and disk, and divides it to allow a user to run a number of virtual computers (machines). This was created largely to drive up the utilization of physical machines, making data centers more efficient and allowing more workloads to run on the same amount of physical compute hardware. And it works brilliantly. The key here is many operating systems per physical machine, which means many virtual machines to patch and manage as if they were single computers. Each virtual machine has its own virtual hard drive and its own operating system. But with this comes a ton of excess baggage. All of those VMs likely have the same copies of binaries, libraries, and configuration files. And, since we are booting multiple kernels on a physical machine, we add a ton of overhead just to run applications.

Enter the party animal, the container. Rather than providing a whole operating system, container technology virtualizes an operating system. Instead of abstracting hardware, container tech abstracts the resources provided by an operating system. By doing so, each container appears to a user or application like a single OS and kernel. It isn't, though. There is only one underlying kernel, with all of the containers sharing time. This solves a different problem: not driving up compute utilization (it certainly can), but instead addressing portability and application development and delivery challenges. Remember all of that baggage with a VM? To move it around, you have to bring it along. And the format of the VM may not be supported on a different virtualization tech, requiring it to be converted. Conversely, with a container, so long as a supported Linux version is installed, you build your app, ship the container, and run that code anywhere. The container running on your laptop with your code can run, unmodified, on Amazon EC2, Google Compute, or a bare metal server. This is very powerful. Not to mention that creating orchestrated and automated provisioning of applications based on containers, at scale, is much easier than with VMs. So much so that most large internet companies are already using containers (Google, Netflix, Twitter) and all platform as a service (PaaS) tools use a container technology for launching application workloads.
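
A hedged sketch of that build-once, run-anywhere workflow (the repository and tag are illustrative):

    docker build -t myorg/webapp:1.0 .     # build the image on your laptop
    docker push myorg/webapp:1.0           # push it to Docker Hub or a private registry

    # ...then on an EC2 instance, a Google Compute VM, or a bare metal server:
    docker run -d -p 80:5000 myorg/webapp:1.0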

Image credit: Docker, Inc.

What are the container technologies available today?

There are a handful of container technologies available in the wild for use today.  Over the past year, most everyone has become familiar with Docker, but in reality there are others which predate Docker by a large margin.  I’ll describe a few of them, some of the details around the technology, and their popularity.

Solaris Zones

Solaris has had container technology as part of the operating system dating back to 2005. There has been some confusion when referring to Solaris containers, as early on a container was a Solaris Zone combined with resource management. Over time, the container part has been dropped, and they are now exclusively referred to as Solaris Zones.

As with other container technology, Solaris Zones virtualize an operating system, in this case Solaris x86 or SPARC. There are two types of zones: global and non-global. Global zones differ from non-global zones in that they have direct access to the underlying OS and kernel, and full permissions. Conversely, non-global zones are the virtualized operating systems. Each of those can be a sparse zone (files shared from the global zone) or a whole root zone (individual files copied from the global zone). In either case, storage use is not large due to the use of ZFS and copy-on-write technology. Up to 8191 non-global zones can be configured per global zone.

Solaris zones are quite popular with system administrators using Solaris systems.  They provide a mechanism to drive higher utilization rates, while providing process isolation.  They are not portable outside of Solaris environments, so running Solaris workloads on public cloud providers is not an option.
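
For flavor, creating a non-global zone looks roughly like this (the zone name and path are illustrative):

    zonecfg -z webzone "create; set zonepath=/zones/webzone; commit"
    zoneadm -z webzone install        # populate the zone's root filesystem
    zoneadm -z webzone boot
    zlogin webzone                    # get a shell inside the running zone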

Docker

Docker is a relative newcomer to the container scene. Originally developed as a functional part of the dotCloud Platform as a Service offering, Docker enabled dotCloud to launch workloads in their PaaS. dotCloud eventually realized they had a core technology on their hands that was valuable outside of their own PaaS, so in 2013 the company pivoted, created Docker, Inc., and began to focus solely on the open source Docker project.

Docker is composed of two major components. The first is the Docker engine, a daemon residing on a Linux operating system that manages containers and resources (CPU, memory, networking, etc.) and exposes controls via an API. The second component of the Docker stack, the Docker container itself, is the unit of work and portability. It is comprised of a layered filesystem which includes a base image and instructions on how to install or launch an application. What it does not have are all of the bits which make up a whole OS. When building a Docker image, a user specifies a configuration file (Dockerfile). This file can include the following: a base image which installs only the necessary OS bits, specific information related to how an application should be installed/launched, and specifics on how the container should be configured.

The key component to the rapid adoption of Docker is portability.  The basis of this is the Docker-developed open source library, libcontainer.  Through this library, virtually any Linux operating system, public, or private cloud provider can run a Docker container – unmodified.  Code that was developed on a laptop and placed into a container will work without changes on, for example, Amazon Web Services.  Libcontainer was the first standardized way to package up and launch workloads in an isolated fashion on Linux.

Rocket

In response to Docker, including a number of perceived issues in the libcontainer implementation, CoreOS went to work on an alternative Linux-based container implementation. The CoreOS implementation also uses kernel constructs to instantiate applications, isolating processes and managing resources. Rocket as a project is focused on providing a container runtime that is composable, secure, distributed in nature, and open. To support this, their initial specification and development effort, Rocket, provides the following.

  • Rocket:  A command line tool for launching and managing containers
  • App Container Spec (appc):
    • App Container Image: A signed/encrypted tgz that includes all the bits to run an app container
    • App Container Runtime: The environment the running app container should be given
    • App Container Discovery: A federated protocol for finding and downloading an app container image

The Rocket project prototype was announced in December 2014. Development has been swift, with new features and functionality added regularly.
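
As an illustrative sketch based on the early prototype (the CLI and image naming were changing quickly at the time, so treat this as a sketch rather than current usage):

    rkt fetch coreos.com/etcd:v2.0.0     # download and verify a signed App Container Image
    rkt run coreos.com/etcd:v2.0.0       # launch it as an isolated app container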

How we see containers being used today

Container adoption is something we have seen evolve very rapidly. In a very short period of time, we have seen a number of companies begin to adopt container deployment. Most of the early adopters are still utilizing the technology for development, providing an easy way for developers to write and deploy code to test/QA while still producing a deployment artifact which operations can peer into, confirming the details of what is being deployed. Our container of choice for these initiatives is Docker, which has a well-developed ecosystem of official software and tool images.

In addition to development efforts, we have worked with organizations to implement container technology for testing purposes. Containers are very well suited for this type of workload. They provide a mechanism to instantiate a number of containers with a given application, including prerequisite services, which is very fast, utilizes few resources, and leaves little behind when testing is complete. In addition, most CI/CD platforms have hooks which are easily extended to use containers for testing purposes; this includes Jenkins and Shippable. For many companies, this is as far as they are comfortable integrating containers into their workflows.

Image credit: Docker, Inc.

There are a handful of organizations who are embracing DevOps and see value in placing containers into production today. We find these organizations very interesting and exciting to work with. As a whole, placing containers into production requires solving a number of critical issues, namely cross-host networking, managing persistent data, and service discovery. The good news is that these are being addressed rapidly, not only by the container developers but also by other technology providers. By utilizing our Application Delivery methodology, we are able to identify services which are candidates for containerization and production deployment and provide the appropriate architecture and processes to migrate or deploy them into production. We also employ the same methodology to define roadmaps for microservices-based application development and delivery. Of the companies we are working with on container deployment, we have mainly utilized Docker and the emerging platform tools. The ecosystem of operations tools and continuous delivery integration supporting Docker is more mature than that of other container technologies, and for enterprise deployments this is an important consideration.

In summary, I hope that this has provided some insight into containers, the players, and how they can be used. There is clearly a ton of work to be done with the technology, but for now, if you or your company is willing to embrace containers you just may see time savings, increased productivity, reduced development and delivery timelines, and production deployment flexibility.


This post was written by Chris Ciborowski

Docker Machine: The easy way to Docker Hosts

February 21, 2015 6:50 pm

At the end of last year, Docker released Docker Machine. It is, without a doubt, the easiest way to spin up Docker hosts. The best part is that Docker Machine can do this not only with local resources (a laptop with VirtualBox) but also on AWS EC2, Digital Ocean, Google Compute Engine, VMware vSphere, and others. In addition, Machine can set up Swarm clusters, which is pretty handy. To see the full list of providers and the README, check out the Docker Machine repository on GitHub.

If you don't want to take the time to look at the README and figure it out, I've put together a 4-minute video on how to set up and use Docker Machine on your laptop using the VirtualBox provider. The video is a little outdated: you won't need to get the development Docker binaries now, as the latest version (1.5.0) includes the identity authentication functionality.
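
The basic flow shown in the video looks roughly like this (the machine name is illustrative):

    docker-machine create --driver virtualbox dev   # provision a Docker host in VirtualBox
    eval "$(docker-machine env dev)"                # point your Docker client at the new host
    docker ps                                       # talk to the Docker Engine running in the VM
    docker-machine ls                               # list machines and their state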

Enjoy!


This post was written by Chris Ciborowski

Changing Application Delivery In the Enterprise Using Docker

January 13, 2015 6:55 pm

I recently had an article published by Superb Crew on how Docker is changing application delivery in the enterprise. Docker, and container technology in general, is changing the way that applications are delivered, enabling teams to take advantage of any infrastructure without the fear of lock-in.

For some time, the media and technical pundits have claimed that the cloud is here to stay and that organizations of all sizes are well on their way to delivering applications and services to their customers and internal clients via the cloud.

Unfortunately, our experience says otherwise.

Nebulaworks knows through experience that there are still quite a few hurdles which must be overcome before organizations, especially the enterprise, make the jump to using the cloud. One of these, vendor lock-in, has been on many a CIO's and CTO's mind. With the considerable effort involved in adding and modifying business processes, hiring specific cloud talent, and shifting development methodologies, the cloud presents a big risk. That is, until a little-known organization by the name of dotCloud started working on a technology which supported a platform they were developing, eventually becoming Docker. Nebulaworks recognized the benefits early, understanding the impact the technology would make, and became the first company in Southern California to partner with Docker.

Build, Ship and Run any App, Anywhere

That is the Docker motto, core to what Docker aims to accomplish. For all intents and purposes, instead of locking into a specific way to develop and deploy applications based on a cloud provider's supported requirements, Docker aims at making that process as simple as launching a container.

Consider a shipping analogy: shipping carriers do not care what is in a container (per se), nor does the vessel differ if the container is transporting fruits and vegetables or consumer electronics. This is the fundamental benefit of Docker. Make the infrastructure which supports an application a commodity with generic requirements (the Docker Engine). You choose whatever fits your business best. This could be compute hosts in your data center or a public cloud provider like SoftLayer. In fact, Docker is designed to be open and agnostic, supporting a broad range of infrastructure technologies.


The Nebulaworks cloud solutions and consulting business is focused on helping companies determine their corporate cloud direction. This includes assisting them with the evaluation and implementation of technologies which deliver the promises of the cloud, transforming their business. Tools like Docker solve one of a number of critical issues in the architecture and deployment of enterprise clouds.


This post was written by Chris Ciborowski