If you are just waking up this morning and catching up with the latest announcements from DockerCon Europe, there was a major one that is (or soon will be) grabbing everyone’s attention: Docker has announced that they are developing and will support the Kubernetes container orchestrator. Whaaatt? Yes, it’s true. Docker has joined the K8s game.
This is welcomed news!
I am sure that by now, the interwebs are pulsing with articles and stories about this, from the positive to the negative. I’m not interested in discussing or arguing those points for the time being. Some may be valid, others not so much. What I am interested in is providing my opinion, based on our experience working closely with the Docker team, on why this is a big deal for enterprise clients.
Beginning with Swarm
We’ve been building container platforms since before Swarm and Kubernetes reached version 1.0. In those early days, we very much approved of the stand-alone Swarm approach, one that we used for building many UNIX-like platforms integrating best-of-breed tools such as registrator, Consul, and HAProxy. These platforms were very robust, and some are still running in production today. But they were at their best when developers fully adopted 12-factor practices, leveraging metadata services and writing runtime requirements and health checks directly into the application code.
Given that the next wave of adoption was about lessening the need to create custom platforms and expanding the user base capable of leveraging containers, from just cloud-native and microservices to existing stacks, Docker set to work on integrating Swarm into the Docker engine itself, et voila, Swarm mode. While this introduced complexities (stacking the clustering into the engine, to name one), it also brought significant advantages. Most importantly, when used across development, test, and production, there is a level of continuity unmatched by any other tool. The ability to use docker-compose across all environments is very compelling, and can definitely speed up the time from idea to running application code. That, after all, is the primary reason folks wanted to jump into containerized workloads.
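To make that continuity concrete, here is a minimal sketch of a compose file (the service and image names are hypothetical) that works unchanged with `docker-compose up` on a developer laptop and with `docker stack deploy` against a Swarm-mode cluster:

```yaml
version: "3"                 # compose file format understood by both docker-compose and Swarm mode
services:
  web:                       # hypothetical front-end service
    image: myorg/web:1.0     # placeholder image name
    ports:
      - "80:8080"
    deploy:                  # Swarm-only section; classic docker-compose ignores it
      replicas: 3
  redis:
    image: redis:alpine
```

In development you run `docker-compose up`; in production the same file drives `docker stack deploy -c docker-compose.yml web`. One manifest, every environment.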
Let’s come back to this in a moment.
From Docker Engine to OCI
Meanwhile, there were quite a few parties that weren’t so keen on the idea of Swarm mode, namely folks supporting the Mesos project and folks supporting the Kubernetes project. The move to bundle orchestration functionality into the engine ignited a tinderbox of dissent and resulted in fragmentation, which is not what was needed to drive the container movement forward to widespread support.
Within a very short period of time, there were discussions of forking docker, returning to just the bits needed to take a container image and run it: no KV services, no orchestration bits. Docker responded by creating the Open Container Initiative to govern a set of standards for container images and runtimes. Why? Because there was a real need, considering a) the de facto standard of the docker image and b) the need to leverage that image within other container orchestration environments. Great! Well, not so fast.
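For context on what “just the bits needed to run it” means, the OCI runtime specification boils a container down to a filesystem bundle plus a JSON config. An abridged, illustrative config.json (most fields trimmed) looks roughly like this:

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": true,
    "cwd": "/",
    "args": ["sh"]
  },
  "root": {
    "path": "rootfs"
  },
  "hostname": "demo"
}
```

A runtime like runc takes exactly this, no registry, no KV store, no orchestrator, and starts the process. That is the narrow scope the OCI was chartered to standardize.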
In my opinion, this was a divisive moment. Some groups leaned toward doing anything necessary to remove docker (the engine) from the equation of running containers in platforms, while others, namely developers, continued adopting docker’s tools. So in the process of creating the OCI, which was meant to be a supporting structure, what really ended up happening was a split between those supporting Docker and those who were not. This did nothing to support the business equation of adopting containers.
As you’ll see later, this created a big problem that I am hoping Docker is capable of solving with their Kubernetes announcement.
The ascension of Kubernetes
While all of this discussion over the current and future state of the container runtime was taking place, many people were contributing to the Kubernetes project. During this period, from mid-2015 to mid-2016, there was explosive growth in the number of folks committing to the project and in newly announced features and functionality.
Many of these items were gaining traction with the TechOps teams who were being tapped on the shoulder to provide environments to support containerized workloads. Pods, Replication Controllers, and Services are concepts that ops teams can leverage to run containers in a way that makes sense to Google and others running containers at scale, in turn, fueling more advancements and maturity.
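To show what those concepts look like in practice, here is a minimal sketch (names and image are hypothetical) of the objects an ops team would write: a Replication Controller that keeps N identical pods running, and a Service that fronts them with a stable address:

```yaml
apiVersion: v1
kind: ReplicationController   # keeps the desired number of identical pods running
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:                   # the pod template
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myorg/web:1.0    # placeholder image name
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                 # stable virtual IP and load balancing in front of the pods
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Note how different this is from a compose file describing the same application; that gap is exactly the dev/ops divide discussed below.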
Simultaneously, all of the major IT software vendors with deep ties to operations budgets, including Red Hat and VMware, announced they were going all-in with Kubernetes. As such, you could say that ops teams were learning about docker (the tool) and containers from the perspective of using Kubernetes to run container images.
Why do we want to use Docker containers again?
Let’s return to the main reason that we liked containers, and Swarm, from the get-go: It was SUPER easy for a developer to create software, test it, and deliver an image that would run unaltered anywhere there was a docker engine. This gave dev and ops not only one thing they could agree on (the container image) but also a way to instantiate that image. The docker engine provided the latter via a deployment manifest (a docker-compose file), and Swarm extended that process across a cluster of engines. This promised an accelerated SDLC and, with the proper continuous pipelines, new levels of velocity (what we call the IT Supply Chain). Great, right?
Again, not so fast.
Recall, ops teams were thinking Kube was the right way to run containers. That means Kube objects. That’s not what dev teams were thinking (docker-compose, plus secrets, networking, etc.). And at this moment, the single process to create, deliver, and run container images, and drive business value, seriously slowed.
Why Docker’s announcement is a BIG deal
Let’s bring this all together. There have been a few projects that have aimed to tie the docker dev world to the ops teams thinking Kubernetes. These include Kompose (converts docker-compose files to Kube objects), Yipee.io (a tool that sits between dev and ops to handle cross-orchestrator deployment), and minikube (a single-VM Kubernetes deployment that runs locally on a dev’s machine), to name a few. But in the end, they are largely tools to handle something that shouldn’t be an issue!
Here’s where Docker can step in and get us back on track: Provide an integrated toolchain that gets us back to what we had with a docker-centric workflow.
The ability to use docker-compose (or a similar method) to develop and orchestrate containers on a single docker engine but also on Kubernetes WITHOUT TRANSLATION would be monumental. Combine this with building images locally (something that you can’t do with minikube without hacking it to expose the underlying docker engine), integrated support for their image registries (Hub, Store, or DTR, with image scanning), and easy consumption of Docker Content Trust, and you have a recipe for success.
This would bring us back to the main reason we liked docker to begin with. Remember this image from the early Docker presentations?
I do! We’ll see how all of this comes together, but I for one hope for the sake of using containers to increase development velocity that the good folks at Docker get it right. Time will tell, but I am optimistic.
If you, your team, or your company is trying to figure out Kubernetes, docker images, continuous pipelines, or infrastructure as code, please reach out to us.