November 12, 2017 1:58 pm
Nebulaworks has been at the epicenter of containers, including container platforms, for almost four years. Let me provide some context. Our first orchestration platform built using multiple OSS projects went into production long before K8s was v1.0. In fact, at the time nobody had coined “CaaS.” Docker didn’t have a commercial offering, and the docker engine hadn’t been decomposed into the bits that comprise the engine today (runC, containerd). If we were helping you with an AWS-first strategy at the time, our recommendation would have been to use Beanstalk to run your containers. You can say we’ve seen it all, deployed every platform in production, both on-premises and in the public clouds.
And in this time we’ve found the following statements to be overwhelmingly true, regardless of the end user, where an orchestration platform is going to be deployed, or the company looking to deploy the tools:
- All of the orchestration platforms are complex engineered systems. Look under the covers of a platform like Kubernetes: you’ve got SDN, a distributed key-value store, layers of reverse proxies, telemetry collection, DNS services, etc. You’ll need to at least understand the architecture and how this all works together!
- They are all self-contained, meaning it is tough to extract the runtime metadata they create, use it to connect to external services outside the orchestration platform, and manage the consumption of those providers through policy-driven approaches.
- No platform has all the answers. No matter if you choose PaaS (deploying your apps as containers) or CaaS, they are opinionated systems, functionally extended largely by the edge cases the OSS contributors are facing. Take a gander at the open and closed issues, PRs and discussions for any of the supporting OSS projects used in the platforms. You’ll learn a lot about why certain decisions were made.
So where does that leave you? If you are just jumping into the container game, you are best served to evaluate the pre-assembled platforms against the value of building your own platform in terms of:
- Having the appropriate skills and managing the initial build
- Keeping up-to-date with changes and enhancements to the platform (i.e., mitigating further accumulation of technical debt)
- Providing something to the business an existing open source project can’t, and at what cost
- How the platforms integrate with a modern application delivery pipeline
From there, if you choose to roll your own container platform or are thinking open source first, a sound pattern is emerging: decompose the functionality these platforms provide (connectivity, security, and runtime) and choose best-of-breed tools that can help enable multi-cloud. For example, you can choose Docker, Consul, Vault, HAProxy, and Nomad and build a very functional platform to run container and non-container processes, with security that goes well beyond simple secrets distribution, and that is open to brokering in and out of the platform (i.e., extending into existing IT infra/services).
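To make the best-of-breed composition concrete, here is a minimal sketch of a Nomad job definition that runs a container via the Docker driver and registers it in Consul, where HAProxy (or another proxy) can discover it. The job name, image, port, and health-check path are all hypothetical, and the resource numbers are illustrative only:

```hcl
# Hypothetical Nomad job: two copies of a containerized web service,
# registered in Consul for discovery by an external load balancer.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2

    task "web" {
      driver = "docker"

      config {
        image = "example/web:1.0"
        port_map {
          http = 8080
        }
      }

      resources {
        cpu    = 250 # MHz
        memory = 128 # MB
        network {
          port "http" {} # dynamically allocated host port
        }
      }

      # Registers the task in the Consul catalog with a health check;
      # HAProxy can render its backend list from this catalog.
      service {
        name = "web"
        port = "http"
        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Secrets for such a job would come from Vault rather than from the scheduler itself, which is exactly the decomposition of connectivity, security, and runtime described above.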
Either way, you should look to experts to help with choosing, implementing, and integrating these distributed systems. They’re not simple tools with point-and-click deployments. Even the “easy buttons” require you to make choices, and out of the box they do not deliver often-desired functionality like auto-scaling. And taking these platforms from test cases to full production-grade deployments requires additional services and capabilities that, you guessed it, are not included in the container platforms.
To learn more about our approach, check out our Container Platforms page. If you already have one of these tools implemented but are working towards more mature enterprise integrations, learn about Continuous App and Infrastructure Pipelines.
Categorised in: DevOps, Docker, Kubernetes
This post was written by Chris Ciborowski
October 17, 2017 1:19 am
If you are just waking up this morning and catching up with the latest announcements from DockerCon Europe there was a major one that is (or will be soon) grabbing everyone’s attention: Docker has announced that they are in the process of developing and will support the Kubernetes container orchestrator. Whaaatt? Yes, it’s true. Docker has joined the K8s game.
This is welcomed news!
I am sure that by now the interwebs are pulsing with articles and stories about this, ranging from positive to negative. I’m not interested in discussing or arguing those points for the time being. Some may be valid; others, not so much. What I am interested in is providing my opinion, based on our experience working closely with the Docker team, on why this is a big deal for enterprise clients.
Beginning with Swarm
We were building container platforms before Swarm and Kubernetes had reached a 1.0 release. In those early days, we very much approved of the stand-alone Swarm approach, one that we used for building many UNIX-like platforms integrating best-of-breed tools, like registrator, Consul, and HAProxy. These platforms were very robust, and are still running in production today. But they were at their best when the developers had completely adopted 12-factor practices, leveraging metadata services and writing runtime requirements and health checks directly into the application code.
Given that the next wave of adoption was about lessening the need to create custom platforms and expanding the user base capable of leveraging containers, from just cloud-native and microservices to existing stacks, Docker set to work on integrating Swarm into the Docker engine itself, et voilà, Swarm mode. While this introduced complexities (stacking the clustering into the engine, to name one), it also came with significant advantages. Most importantly, when used in development, test, and production, there is a level of continuity unmatched by any other tool. The ability to use docker-compose across all environments is very compelling, and can definitely speed up the time from idea to running application code. That, after all, is the primary reason folks wanted to jump into containerized workloads.
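That continuity is easiest to see in a Compose file. A minimal sketch like the one below (service names and image are hypothetical) can drive `docker-compose up` on a developer laptop and, unchanged, `docker stack deploy` against a Swarm-mode cluster:

```yaml
# Hypothetical two-service stack. The same file works with
# `docker-compose up` locally and
# `docker stack deploy -c docker-compose.yml web` on a Swarm-mode cluster.
version: "3"
services:
  web:
    image: example/web:1.0
    ports:
      - "80:8080"
    deploy:          # honored by Swarm mode, ignored by plain docker-compose
      replicas: 3
  redis:
    image: redis:alpine
```

One manifest, three environments: that is the dev/test/prod continuity argument in a dozen lines.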
Let’s come back to this in a moment.
From Docker Engine to OCI
Meanwhile, there were quite a few parties that weren’t so keen on the idea of Swarm mode, namely folks supporting the Mesos project and folks supporting the Kubernetes project. The move to bundle the orchestration functionality into the engine ignited a tinderbox of dissent and resulted in fragmentation, which is not what was needed to drive the container movement significantly forward to widespread support.
Within a very short period of time, there were discussions of a forked Docker: returning to just the bits needed to take a container image and run it, with no need for KV services or the orchestration bits. Docker responded, creating the Open Container Initiative to govern the creation of a set of standards for container images and runtimes. Why? Because there was a need for this, considering a) the de facto standard of the Docker image and b) the need to leverage it within other container orchestration environments. Great! Well, not so fast.
In my opinion, this was a divisive moment, with some groups leaning toward doing anything necessary to remove Docker (the engine) from the equation of container use in the context of platforms, and others, namely developers, continuing to adopt Docker’s tools. So in the process of creating the OCI, which was meant to be a supporting structure, what really ended up happening was a split between those supporting Docker and those who were not. This did nothing to support the business equation of adopting containers.
As you’ll see later, this created a big problem that I am hoping Docker is capable of solving with their Kubernetes announcement.
The ascension of Kubernetes
While all of this discussion over the current and future state of the container runtime was taking place, many people were contributing to the Kubernetes project. During this period, from mid-2015 to mid-2016, there was explosive growth in the number of folks committing to the project and in newly announced features and functionality.
Many of these items were gaining traction with the TechOps teams who were being tapped on the shoulder to provide environments to support containerized workloads. Pods, Replication Controllers, and Services are concepts that ops teams can leverage to run containers in a way that makes sense to Google and others running containers at scale, in turn, fueling more advancements and maturity.
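For readers who didn’t live through that era, here is a minimal sketch of the objects mentioned above, with hypothetical names and image: a Replication Controller keeping three copies of a pod running, and a Service giving them a stable address:

```yaml
# Hypothetical Replication Controller: Kubernetes keeps 3 pods matching
# the `app: web` label running, replacing any that fail.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
---
# Hypothetical Service: a stable virtual IP load-balancing across the pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Note how different this shape is from a docker-compose file describing the same application; that gap is exactly the dev/ops translation problem discussed below.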
Simultaneously, all of the major IT software vendors with deep ties to operations budgets, including Red Hat and VMware, announced they were going all in with Kubernetes. As such, you could say that ops teams were learning about docker (the tool) and containers from the perspective of using Kubernetes to run container images.
Why do we want to use Docker containers again?
Let’s return to the main reason that we liked containers, and Swarm, from the get-go: it was SUPER easy for a developer to create software, test it, and deliver an image that would run unaltered anywhere there was a docker engine. This gave us not only one thing dev and ops could agree on (the container image) but also a way to instantiate that image. The docker engine provided the latter functionality with a deployment manifest (a docker-compose file), and Swarm extended that process to a cluster of engines. This promised enhanced speed in the SDLC and, with the proper continuous pipelines, new levels of velocity (what we call the IT Supply Chain). Great, right?
Again, not so fast.
Recall, ops teams were thinking Kube was the right way to run containers. That means Kube objects. That’s not what dev teams were thinking (docker-compose, with its secrets, networking, etc.). And at this moment the single process to create, deliver, and run container images – and drive business value – seriously slowed.
Why Docker’s announcement is a BIG deal
Let’s bring this all together. There have been a few projects that have aimed to tie the docker dev world to the ops teams thinking Kubernetes. These include Kompose (to convert docker-compose files to Kube objects), Yipee.io (a tool that sits between dev and ops and handles cross-orchestration-platform deployment), and minikube (a single-VM Kubernetes deployment to run locally on a dev’s machine), to name a few. But in the end, they are largely tools to handle something that shouldn’t be an issue!
Here’s where Docker can step in and get us back on track: Provide an integrated toolchain that gets us back to what we had with a docker-centric workflow.
The ability to use docker-compose (or a similar method) to develop and orchestrate containers on a single docker engine but also on Kubernetes WITHOUT TRANSLATION would be monumental. Combine this with building images locally (something that you can’t do with minikube without hacking it to expose the underlying docker engine), integrated support for their image registries (Hub, Store, or DTR, with image scanning), and easy consumption of Docker Content Trust, and you have a recipe for success.
This would bring us back to the main reason we liked docker to begin with. Remember this image from the early Docker presentations?
I do! We’ll see how all of this comes together, but I for one hope for the sake of using containers to increase development velocity that the good folks at Docker get it right. Time will tell, but I am optimistic.
If you, your team, or your company is trying to figure out Kubernetes, docker images, continuous pipelines, or infrastructure as code, please reach out. My team at Nebulaworks are pros at these and other DevOps and cloud tools, processes, and up-skilling. We’d be happy to be a partner in your modernization projects.
Categorised in: Kubernetes
This post was written by Chris Ciborowski
December 15, 2015 11:00 am
Have you heard of GIFEE? If not, now you have. If you have, you are ahead of the game.
G = Google
I = Infrastructure
F = For
E = Everyone
E = Else
Yep, that’s right. Google infrastructure for everyone else. No, I’m not the first to coin this term…but we may be the first consulting firm to focus on this space, develop expertise, and help clients build and operate these platforms.
Let’s rewind the clock. Five years ago, everyone was quite familiar with how the enterprise would manage their IT environment and deploy applications. Virtualization? Yep, everyone was virtualized. Converged infrastructure? Yeah, have that too. Way easier to manage than a number of different platforms. Centralized computing reincarnated. Maybe even some engineered systems to run big back-end services like Oracle. Point being, everyone was comfortable. Vendors were fat and happy, and customers were at a stasis point. Times were good. But some teams saw change approaching. Here’s a question I got five or six years ago from a large telco, who was still hosting email for their customers: “How do we do what Google does?” My response was to check out the research paper on GFS. At the time, we knew so very little about what was really going on…
Categorised in: Kubernetes
This post was written by Chris Ciborowski