Image Credit: Docker, Inc.

It’s been a bit since the Nebulaworks team returned from DockerCon 2016. We all had a blast: reconnecting with folks we trained at DockerCon 2015, seeing current customers we’ve helped with enablement and training, and being introduced to a number of new folks taking their first steps with Docker. Without a doubt, the most frequent topic of discussion among the attendees we talked with was Docker Datacenter and Engine 1.12, specifically:

What is the impact of Docker Datacenter (DDC) built using Docker 1.12 on the orchestration and tools market?

TL;DR: it is big. It is going to single-handedly change the way teams consider deploying Docker-based apps. In this post, I’ll attempt to summarize why.

Let’s go back to June 2015 and a typical platform build.

We had just sold the first commercial Docker subscription to a customer here in Southern California. Side note…Booz Allen stole the show with their GSA deal. Rightly so, it was quite large! So we took second, which I am still very proud of. But I digress.

We had been talking to a client for months, and they saw the benefit of building their new application suite using a microservices architecture. The perfect software delivery artifact was a Docker image. And to run those images, they needed a platform that could accept an automated deployment via an API, automatically register and set up proxy services for all application endpoints, and, when necessary, scale services automatically. So, we set forth on the task of building a platform to accomplish all this. Two key figures from that build: 1.6 and 0.2.

We started with Docker Engine CS 1.6. For those who don’t know, that was the first commercially supported (CS) engine release from Docker. We laid this out over a few bare-metal hosts, setting up the engines to use TLS – which took some doing, as in this case we needed to create our own CA as well as self-sign wildcard certs. From there, we needed to cluster all of the hosts together. Enter Swarm, version 0.2. Back then, Swarm was new. No multi-manager, no proxy services, nada. Just the Docker API to schedule containers on hosts. Being a production deployment, we chose to use a K/V store (Consul) for cluster discovery and leader election, rather than more primitive options, and secured that with TLS as well. With distribution of containers licked, we moved on to the next challenge: how to register and configure proxy services automatically. This proved to be much more of a challenge.
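For flavor, here is roughly what wiring that up looked like at the time. This is a sketch only, with hypothetical hostnames, ports, and cert paths:

```shell
# Verify a TLS-secured engine is reachable (host and certs are illustrative)
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://node1.example.com:2376 info

# Run a standalone Swarm (v0.2-era) manager that uses Consul for
# node discovery and leader election; clients then point their
# DOCKER_HOST at tcp://node1.example.com:3376
docker run -d -p 3376:2375 swarm manage \
  consul://consul.example.com:8500/swarm
```

Each host additionally ran a `swarm join` agent advertising itself into the same Consul path, which is how the manager discovered schedulable engines.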

But more than just Docker is needed…

With a K/V store in place, we could spin up tooling that used it to keep a consistent view of which containers were running and where, and use those values to rewrite our proxy configuration. Based on the client’s needs, the architecture we decided on was:

  • HAProxy
  • Registrator
  • Consul-template
  • Dnsmasq
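As a sketch of how two of those pieces were launched (image tag, Consul address, and template file are illustrative), Registrator watched the local Docker socket while Consul-template watched Consul:

```shell
# Registrator: publish container start/stop events into Consul
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500

# Consul-template: re-render the HAProxy config on service changes,
# then restart HAProxy to pick up the new file
consul-template -consul localhost:8500 \
  -template="haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy restart"
```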

Together, these formed the suite of tools that would detect container events (start, stop, terminate), drop the key information about the container (node IP and high-number port) into the K/V store, write a new HAProxy configuration file, and provide name resolution for all containers within the orchestration platform – in a nutshell, solving the real challenge of running containers. For the more technically inclined, here was the process:

  1. Docker swarm starts a container
  2. Registrator, listening on the Docker UNIX socket, detects the event and drops the K/V pairs into Consul
  3. Consul-template, listening on the Consul port, detects new services matching a template you create – in this case for HAProxy (with service, node, port, and IP address values) – recreates the HAProxy configuration file, and restarts HAProxy with the new file
  4. Dnsmasq allows registered containers and services to be located by name, either on the network or within Docker containers themselves. Using this, we could use CNAME or SRV records and remove all dependency on hardcoded addresses
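To make step 3 concrete, a minimal (hypothetical) `haproxy.ctmpl` might look like the following, turning every healthy instance of a Consul service named "web" into an HAProxy backend server line:

```
backend web_back
  balance roundrobin{{ range service "web" }}
  server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check{{ end }}
```

Consul-template re-renders this whenever instances of "web" come and go, which is what kept HAProxy in lockstep with the scheduler.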

Definitely not a trivial build.

There were a number of items that needed to be worked out (which I cannot go into in detail here), but suffice it to say it was complicated, and any change could be a breaking change. After all the issues were sorted, though, we ended up with a very stable platform, which today still hosts more than 600 services and became the basis for an AWS implementation.

You may ask, why not Kubernetes? There was no supported distribution at the time, and it hadn’t even reached v1. Not to mention, we needed a supported private registry, and Docker Hub Enterprise (which predated DTR) was part of the commercially supported license. Why not Mesos/Marathon? It had the same gaps we solved for, plus the need to run ZooKeeper just for master election. Spinning up more than you need to solve a problem is an anti-pattern.

Back to the subject at hand: Docker Datacenter and Engine 1.12’s impact.

So we move forward to today and the forthcoming Docker Datacenter release built on Docker 1.12 (which we are beta testing with Docker). What does it bring to the table? If you answered everything, you are right: everything that we put into that first platform and more. Swarm mode by default with multi-manager? Check. Distributed K/V store and consensus protocol? Check. PKI with TLS everywhere? Check. Rotating certs for TLS? Check again (and we didn’t do that automatically). Private registry? Uh huh. Proxy services? Heck yeah, and it’s in-kernel using IPVS, which outperforms userland proxies (like NGINX and HAProxy). And lastly, it will bring about the general availability of Docker Stacks and Distributed Application Bundles (DAB). Today this is experimental, along with some other trick functionality (Macvlan and Ipvlan – which should technically be MACVLAN and IPVLAN). This rounds out an otherwise huge update to the entire Docker stack.
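To put the difference in perspective, here is a hedged sketch of what replaces our entire 2015 stack under 1.12 (service name, image, and replica counts are illustrative):

```shell
# Initialize swarm mode: Raft consensus, PKI, and mutual TLS with
# certificate rotation are all set up by this one command
docker swarm init

# Create a replicated service; the routing mesh (IPVS) publishes
# port 80 on every node, so no external proxy or registrator is needed
docker service create --name web --replicas 3 -p 80:80 nginx

# Scaling is one command as well
docker service scale web=5
```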

Now, do you see why this is game changing? To get a platform running, there is no need for proxies, registration containers, K/V stores for leader election, templating services, etc. What’s the impact? While at DockerCon we talked to a number of the vendors who were there. From security to networking, orchestration to platforms, there were folks trying to determine what their new business model would be. Aside from software-defined storage for containers and logging/monitoring, the platform-and-tooling business is finished. Move along, nothing to be seen here.

The only item you’ll be considering now is whether you believe this is the “right” way for Docker to be moving as a company. That, my friends, is a question you will need to answer. To some, this looks like a closed system, developed by a software company based on its own opinions, and that does make some sense. By similar logic, tooling like Kubernetes and Mesos(sphere) is being developed in the open but was originally designed to solve the problems encountered by large orgs like Google and Twitter, way beyond the scale of the Fortune 100, let alone the medium-sized business looking to roll out a 5-10 node environment (which can host thousands of containers with ease). Regardless of the container orchestration engine (COE) you choose, so long as you don’t code vendor-specific functionality into your apps, using containers gives you the flexibility to use whatever orchestration tools you like.

One last note on that choice. We believe, after building many platforms, that you don’t want or need to spend cycles trying to figure this out for yourself. It is better to get something that works out of the box with support. Use the time and expense saved to integrate your development, build, test, and release into an end-to-end pipeline so you can deploy applications continuously. This is where the benefits of using containers are truly realized, and it is what we believe most companies are looking to accomplish using a new generation of development methodologies and associated tools.