June 19, 2015 1:26 pm
Exciting times. Very exciting times. And yes, pun intended. 🙂
It is hard to believe that a year has passed since the last DockerCon took place in San Francisco. At the time, we were a brand new company, just getting started in the business of helping companies deploy applications better. We were a sponsor at the Cloud Foundry Summit, and while there we kept hearing buzz about this small conference and a technology called Docker. There was no doubt more talk about Docker than Cloud Foundry during the three days we were in SF.
Since we had already been working with microservices and distributed application architectures deployed on PaaS, we were familiar with why Docker would be groundbreaking. In fact, I had experience with similar technologies going back to BSD and Solaris (albeit from an infrastructure perspective), so we immediately saw the business value (whereas PaaS was a bit more difficult to explain, but that is another discussion). As a company, we started on our path to help customers understand the benefit of containerized application delivery, working with Docker and the container ecosystem at large. Remember that giddy feeling when you worked with a new tool that made your life easier? Yeah, that's how we felt, and it was something we as a team hadn't experienced in a long time.
Categorised in: Conferences
This post was written by Chris Ciborowski
June 17, 2015 10:11 pm
Here is part 2 of Got Docker…What about Orchestration? In part one, we covered what an orchestration framework is and the major components that comprise one. In case you missed it, the post can be found here.
List of Orchestration Frameworks
Now that we have the primer out of the way, let's get into a few of the frameworks available at this time: who is developing them, a few of their strengths, and where we see them fitting. This is not meant to be an all-inclusive list, rather just a few of the most popular options. Also, each of these has many, many features, far more than we can cover here.
Docker Swarm (https://docs.docker.com/swarm/) is arguably the simplest framework available today. Right now, it is in beta. Functionality is being added quickly; however, some key items are still missing. Number one is software-defined networking: today, you cannot schedule workloads across Docker hosts and link them together seamlessly out of the box. Swarm ships with a hosted discovery service and supports some of the most popular discovery backends, such as etcd, Consul, and ZooKeeper.
Where Swarm shines is the integration with the standard Docker command line tools. Once the Swarm cluster is created, launching a container on the cluster utilizes the standard Docker commands and the same holds true for managing the container. It also supports some of the functionality of Docker Compose, allowing the turn-up of an entire application stack with one file and one command: docker-compose up.
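As a sketch of how little ceremony is involved (hostnames, ports, and the beta-era hosted-discovery workflow below are illustrative, so check the Swarm docs for your version), standing up a small cluster and driving it with the standard Docker client might look like this:

```shell
# Create a cluster ID using Swarm's hosted discovery service (beta-era workflow)
TOKEN=$(docker run --rm swarm create)

# On each Docker host, join the cluster (node-1:2375 is a placeholder address)
docker run -d swarm join --addr=node-1:2375 token://$TOKEN

# On the manager host, start the Swarm manager
docker run -d -p 3375:2375 swarm manage token://$TOKEN

# The standard Docker CLI now targets the whole cluster, not a single host
docker -H tcp://manager-host:3375 run -d nginx
docker -H tcp://manager-host:3375 ps

# Compose works the same way against the cluster endpoint
DOCKER_HOST=tcp://manager-host:3375 docker-compose up -d
```

The point is that nothing in the day-to-day workflow changes; the same commands you run against a single Docker Engine now schedule across the cluster.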
Swarm is by far the easiest tool to set up to get an orchestration framework for running containerized applications. However, due to the lack of some key tooling (distributed networking), it is not ready for large-scale deployments. That said, for smaller environments consisting of a handful of Docker hosts, Swarm may be a good choice. We typically recommend Swarm for clients with a few Docker hosts, along with an outside service registry and proxy service.
Next on our list of frameworks is Kubernetes (http://kubernetes.io). Developed by Google and loosely based on Borg (https://research.google.com/pubs/pub43438.html), their large-scale container management tool, it is primarily aimed at deploying and managing containers at scale. Also in beta, it is gaining new functionality quickly, with a very large number of contributors to the project. In fact, it is one of the fastest-growing projects on GitHub.
Kubernetes is full featured, including networking abstraction, a service registry (etcd), proxy services, and a command line tool. You could say that today, youth aside, it is well on its way to solving many of the production challenges. The goals of Kubernetes are to be:
- lean: lightweight, simple, accessible
- portable: public, private, hybrid, multi cloud
- extensible: modular, pluggable, hookable, composable
- self-healing: auto-placement, auto-restart, auto-replication
With a decade of experience running containers at scale, Google has already addressed features that have yet to be introduced in other frameworks. Replication Controllers and Services are two of these: Replication Controllers manage the lifecycle and health of your containers (deployed as pods), while Services provide naming and also act as a load balancer.
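To make that concrete, here is a minimal sketch of a Replication Controller and Service (the app name, image, and field layout are illustrative; the Kubernetes API schema was changing quickly during the beta, so your version's docs are the authority):

```shell
# Declare a replication controller that keeps 3 nginx pods running
cat <<'EOF' > nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl create -f nginx-rc.yaml

# A service gives the pods a stable name and load-balances across them
kubectl expose rc nginx --port=80
```

If a pod dies, the replication controller replaces it; the service keeps routing traffic to whichever pods currently match the `app: nginx` label.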
Being beta, we typically do not recommend production rollouts of Kubernetes unless the company we are speaking to has significant experience with Linux and HPC. However, it does have network abstraction solved today, which is a big plus. We feel it is a solution for companies with many Docker hosts, but it can scale down to smaller implementations. There is a steep learning curve, and a few companies are working on developing an easy-to-deploy distribution. It is also the technology being used in OpenShift (Red Hat's PaaS) as well as CoreOS Tectonic.
Triton is brought to us by the folks at Joyent (https://www.joyent.com). It too is fairly new to the scene, still in beta but about to go GA in the near term. But there are some interesting things about Triton that make it vastly different from some of the other frameworks.
Triton is the first Docker container orchestration framework that does not use Docker Engine as the runtime to execute containers. Instead, it is based on SmartDataCenter, which in turn is based on SmartOS, which is based on Illumos, which was in turn based on OpenSolaris. So Triton has a long history of running containers, which are called zones in the Solaris world. Thanks to that deep history, many of the issues the current crop of Linux-based frameworks are still trying to solve have already been addressed. But this was not possible until the folks at Joyent resurrected LX-branded zones, which allow Linux executables to run on SmartOS unmodified. Once this translation layer was completed, the puzzle came together, with off-the-shelf resolutions to networking and security: Solaris had long since solved these issues, providing unique IP addresses for individual containers and isolating all of a container's processes from others running on the same host kernel.
But this greatness comes with a caveat: you are not running Docker Engine and libcontainer. While the normal Docker client works with Triton, not all of the commands and functionality are available. Joyent is close to Docker, but keeping parity with the API could be an issue if you want the latest and greatest functionality. What you miss there, though, you gain with two other Solaris technologies which are game changing: DTrace and ZFS. Both are killer and worth their own blog posts in the future.
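In practice, using Triton looks like pointing your existing Docker client at a remote endpoint instead of a local daemon (the hostname below is a placeholder for a Joyent data center API; see Joyent's sdc-docker setup docs for the real endpoints and TLS certificate setup):

```shell
# Point the standard Docker client at a Triton endpoint instead of a
# local daemon (placeholder hostname; TLS certs must be configured first)
export DOCKER_HOST=tcp://us-east-1.docker.example.com:2376
export DOCKER_TLS_VERIFY=1

# The familiar commands work, but each "container" is actually a SmartOS
# zone with its own IP address, not a process on a shared Docker Engine
docker run -d -p 80 nginx
docker ps
```

There is no host to provision or cluster to manage on your side; the data center itself is the Docker host.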
CoreOS (http://www.coreos.com) is an interesting set of tools, and together they provide a complete framework solution. While the company initially started out as a pure Linux play, it has moved towards supporting far more than just Docker containers. Late last year, CoreOS decided to launch their own container format to address concerns with the security and overall direction of Docker. What is more interesting is that while they have made it quite clear they are not 100% aligned with the Docker approach, they are undoubtedly still supporting Docker.
CoreOS combines a number of stand-alone services which work together to provide the orchestration functionality. These include:
- CoreOS: Operating System
- etcd: Consensus and Discovery
- Rocket/Docker: Container Runtime
- Fleet: Distributed init
- Flannel: Networking
One of the big benefits of using a complete solution from one vendor is that they will make sure all of the components work together as one unit, and I am sure the CoreOS folks are working towards that goal. In addition, the CoreOS approach brings some interesting capabilities to bear which are either not addressed by the other solutions or require multiple tools from different developers (i.e. multiple support paths). First, with CoreOS as the fundamental building block, updates are a breeze: using the public update service (or CoreUpdate for subscribers), users get a dual root-partition scheme, allowing easy deployment of updates as well as a safe rollback process. Cool. Second, every CoreOS host runs etcd. This provides native service discovery, which allows containers to be launched across hosts, provides automatic configuration to applications and services (like HAProxy), and enables multi-cloud deployments easily. And the final component which brings everything together is fleet. By creating a compute cluster of CoreOS hosts which resembles a single init system, containers can be scheduled across the cluster with consideration not only for workloads but also for affinity and constraints.
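A rough sketch of the fleet workflow (the unit name, image, and etcd key layout below are all placeholders; consult the fleet and etcd docs for your CoreOS release): you describe a container as a systemd unit, add fleet-specific scheduling hints, and hand it to the cluster.

```shell
# A fleet unit file describing a containerized service
cat <<'EOF' > myapp.service
[Unit]
Description=My containerized app
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 8080:80 nginx
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Keep copies of this unit on different hosts (an affinity constraint)
Conflicts=myapp*.service
EOF

# Submit the unit; fleet picks a host in the cluster and starts it there
fleetctl start myapp.service
fleetctl list-units

# Announce the service in etcd so other hosts (e.g. an HAProxy
# configuration watcher) can discover it
etcdctl set /services/myapp/host-1 '{"host": "10.0.0.5", "port": 8080}'
```

The `[X-Fleet]` section is where the cluster-level scheduling hints live; everything else is plain systemd, which is exactly the "single init system" idea.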
Mesos and Mesosphere DCOS
Mesos (http://mesos.apache.org) and the commercially developed product Mesosphere DCOS (https://mesosphere.com) are aimed at solving a bit of a different problem in the datacenter. While certainly an orchestration framework, the power of these tools is to allow users to see an entire datacenter as a single kernel. Basically, one big collection of CPU, memory, disk, and network which can have workloads (I’ll get back to this in a second) scheduled on it much like a process is scheduled on your laptop.
The Mesos framework provides the ability to run not only containers but also short-running, long-running, and other applications across a single cluster. Instead of having a dedicated cluster for Hadoop workloads and another for Docker containers, you could have a single cluster with all of the workloads running simultaneously on the same nodes. This drives up utilization and overall efficiency of the data center. Taking this a step further, Mesos and Mesosphere DCOS expose the underlying scheduler so that you can install other services to utilize the cluster in ways which may not be considered today. Out of the box, Mesosphere DCOS supports Cassandra, HDFS, Kubernetes (what?), Kafka, and Spark. They also support two other services: Marathon, an init system for long-running processes, and Chronos, a distributed cron system for short-running processes.
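Marathon exposes a REST API, so launching (and later scaling) a long-running Docker workload is a matter of posting an app definition to it. A hedged sketch (host, app id, and resource numbers are placeholders; the exact JSON schema depends on your Marathon version):

```shell
# POST an app definition to the Marathon API: run 2 instances of an
# nginx container, each with a quarter CPU and 128 MB of memory
curl -X POST http://marathon-host:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "web",
    "cpus": 0.25,
    "mem": 128,
    "instances": 2,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "nginx",
        "network": "BRIDGE",
        "portMappings": [ { "containerPort": 80 } ]
      }
    }
  }'

# Scaling is a single call: bump the instance count and Mesos finds room
curl -X PUT http://marathon-host:8080/v2/apps/web \
  -H 'Content-Type: application/json' -d '{"instances": 10}'
```

Marathon restarts instances that die and places new ones wherever the Mesos scheduler finds spare CPU and memory, which is the "data center as a single kernel" idea in action.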
Here’s where Mesos and Mesosphere DCOS get interesting with Docker: Kubernetes runs on top of Mesos or DCOS! What does that mean? You can address multi-tenancy issues by running multiple K8s environments, or run an updated K8s environment next to the current production environment for testing. Cool. Oh, and if K8s needs to scale, Mesos/DCOS handles that for you, scheduling workloads across additional nodes if necessary.
And that brings me to scaling. This is something that happens automatically, handled by the data center kernel. Need more capacity? Scale out with one command. Need to scale an application? Scale out with one command (the new instances are automatically added to the load balancer, too). And if you want to test that scale, there are both a traffic simulator and a failure simulator (chaos) which allow you to test, in real time, the effects of load or loss of services.
I hope you enjoyed the posts and that they were valuable as a primer. As I mentioned before, they were not meant to be an exhaustive list of all the orchestration frameworks available, nor were they meant to provide a detailed list of every technology and component included with each. These are complex pieces of software, and there is much more under the covers with each of them…and as such, what works for one organization may not be a fit for another.
I feel that we are at the beginning of quite an exciting period, where any company can take advantage of tools developed to solve the unique challenges of distributed computing and distributed application delivery, especially at scale. This is what we do daily at Nebulaworks: helping companies deliver software better by helping you understand and implement the tools and services available to achieve that goal. If you need help with these or would like more information, please reach out, we'd love to help.
Categorised in: Docker
This post was written by Chris Ciborowski