3 Must-know Facts About Docker Container Platforms

November 12, 2017 1:58 pm

Nebulaworks has been at the epicenter of containers, including container platforms, for almost four years. Let me provide some context. Our first orchestration platform, built using multiple OSS projects, went into production long before Kubernetes reached v1.0. In fact, at the time nobody had coined “CaaS.” Docker didn’t have a commercial offering, and the Docker engine hadn’t been decomposed into the bits that comprise the engine today (runC, containerd). If we were helping you with an AWS-first strategy at the time, our recommendation would have been to use Elastic Beanstalk to run your containers. You could say we’ve seen it all, having deployed every platform in production, both on-premises and in the public clouds.

In that time we’ve found the following statements to be overwhelmingly true, regardless of the end user, where the orchestration platform will be deployed, or the company looking to deploy the tools:

  1. All of the orchestration platforms are complex engineered systems. Look under the covers of what is needed in a platform like Kubernetes: you’ve got SDN, a distributed key-value store, layers of reverse proxies, telemetry collection, DNS services, and more. You’ll need to at least understand the architecture and how it all works together (see the sketch after this list).
  2. They are all self-contained, meaning it is tough to extract the runtime metadata they create, use it to connect to external services outside of the orchestration platform, and manage the consumption of those providers through policy-driven approaches.
  3. No platform has all the answers. Whether you choose PaaS (deploying your apps as containers) or CaaS, they are opinionated systems, functionally extended largely by the edge cases the OSS contributors are facing. Take a gander at the open and closed issues, PRs, and discussions for any of the supporting OSS projects used in these platforms. You’ll learn a lot about why certain decisions were made.
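
To make point 1 concrete, here is a minimal sketch, assuming you have the official Kubernetes Python client installed and a kubeconfig pointing at a running cluster, that lists the system components even a “simple” cluster is quietly running for you (DNS, proxies, the kv store, and so on). It is an illustration only, not part of any platform’s tooling.

```python
# Minimal sketch: list the control-plane and system pods in kube-system.
# Assumes `pip install kubernetes` and a kubeconfig at the default location.
from kubernetes import client, config


def list_system_components():
    """Print the pods running in kube-system: DNS, kube-proxy, etcd, and friends."""
    config.load_kube_config()          # read credentials from ~/.kube/config
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace="kube-system")
    for pod in pods.items:
        # Each pod maps to one of the moving parts mentioned above:
        # coredns/kube-dns (DNS), kube-proxy (proxying), etcd (the distributed kv store), etc.
        print(f"{pod.metadata.name:50s} {pod.status.phase}")


if __name__ == "__main__":
    list_system_components()
```

Even that short listing makes the point: there is a lot of machinery to understand before you run a single workload.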

So where does that leave you? If you are just jumping into the container game, you are best served to evaluate the pre-assembled platforms against the value of building your own platform in terms of:

  • Having the appropriate skills and managing the initial build
  • Keeping up-to-date with changes and enhancements to the platform (i.e., mitigating further accumulation of technical debt)
  • Providing something to the business that an existing open source project can’t, and at what cost
  • How the platforms integrate with a modern application delivery pipeline

From there, if you choose to roll your own container platform or are thinking open source first, a sound pattern is emerging: decompose the functionality these platforms provide (connectivity, security, and runtime) and choose best-of-breed tools that can help enable multi-cloud. For example, you can choose Docker, Consul, Vault, HAProxy, and Nomad and build a very functional platform to run containers and non-container processes, with security that goes well beyond simple secrets distribution, and that is open to brokering in and out of the platform (i.e., extending into existing IT infra/services).
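
As one illustration of what that brokering can look like, here is a minimal sketch that reads a secret from Vault’s KV v2 HTTP API and registers the consuming service with a local Consul agent. The addresses, token, secret path, service name, and health-check endpoint below are hypothetical placeholders, not a prescribed layout.

```python
# Minimal sketch of the pattern above: pull a secret from Vault, then register
# the resulting service in Consul so other workloads can discover it.
# Addresses, token, secret path, and service name are hypothetical examples.
import requests

VAULT_ADDR = "http://127.0.0.1:8200"
VAULT_TOKEN = "s.example-token"        # in practice, obtained via an auth method
CONSUL_ADDR = "http://127.0.0.1:8500"


def read_db_credentials():
    """Read a secret from Vault's KV v2 engine at secret/data/myapp/db."""
    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/myapp/db",
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["data"]["data"]  # KV v2 nests the payload under data.data


def register_service(name="myapp", port=8080):
    """Register the app with the local Consul agent, including an HTTP health check."""
    payload = {
        "Name": name,
        "Port": port,
        "Check": {"HTTP": f"http://localhost:{port}/health", "Interval": "10s"},
    }
    resp = requests.put(f"{CONSUL_ADDR}/v1/agent/service/register", json=payload, timeout=5)
    resp.raise_for_status()


if __name__ == "__main__":
    creds = read_db_credentials()
    print("fetched db credentials for user:", creds.get("username"))
    register_service()
```

The same pattern extends to Nomad, which exposes a comparable HTTP API for job submission, which is what makes the decomposed, best-of-breed approach portable across clouds.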

Either way, you should look to experts to help with choosing, implementing, and integrating these distributed systems. They’re not simple tools with point-and-click deployments. Even the “easy buttons” require you to make choices, and out of the box they do not deliver often-desired functionality like auto-scaling. And taking these from test cases to full production-grade deployments requires additional services and capabilities that, you guessed it, are not included in the container platforms.

To learn more about our approach, check out our Container Platforms page. If you already have one of these tools implemented but are working towards more mature enterprise integrations, learn about Continuous App and Infrastructure Pipelines.


This post was written by Chris Ciborowski

Recapturing $2.6 Trillion in IT waste with DevOps

May 10, 2016 11:13 am

Wasteful spending is all around us: federal, state, and local government, companies of all sizes, and likely even your own household. Come to think of it, why am I paying for that landline again? So the question arises: when evaluating IT spend, do you contemplate new approaches (like DevOps) to help reduce waste?

Wasteful behavior isn’t usually something that anyone sets out to exhibit. Rather, it is a result of maintaining the status quo. Contracts are negotiated, teams are built, technologies deployed. Then we move on to the next challenge. Over time we standardize and create processes in an attempt to keep costs in check and control waste. Unfortunately, many times we are measuring success against historical data, like industry standards for year-over-year discounting. So we press on, spending on items we believe are required, albeit at a reduced cost. Success, you say? We’ll get back to that.

In 2014 Gene Kim (co-author of The Phoenix Project) asserted that there is roughly $2.6 trillion in opportunity cost directly related to wasted IT spend. Let’s think about that. By continuing down “Status Quo Avenue,” collectively we’re missing out on opportunities totaling $2.6 trillion. That could mean expanding into new markets or garnering additional market share, faster feature release cycles, or even increased operational efficiencies. Assuming this number is reasonably accurate, how does one go about identifying previously overlooked or endemic waste and using it to capture new opportunities? More specifically, how can DevOps be used as a mechanism to accomplish this or catalyze our efforts?



This post was written by Chris Ciborowski

What’s causing slow container and DevOps adoption?

March 8, 2016 8:53 pm

Analogical Reasoning and DevOps

A couple of weeks ago, CIO.com writer Clint Boulton posted an article titled “CIOs aren’t ready for Docker and Container technology,” in which he discusses why CIOs are slow to adopt containers and DevOps. It is a pretty good read, providing more substance than other pieces covering emerging technology. In fact, it was largely supported with data gathered live from 80 CIOs in attendance at a Wall Street Journal event. As an empirical-data guy, I consider this a good thing. But I kept finding myself taking issue with the “CIOs aren’t ready” part of the title. At Nebulaworks we work on DevOps enablement projects daily, and containers are a key piece of the puzzle to drive pipeline productivity. So why are CIOs slow to embrace the technology and adopt DevOps?

I went back and reread the article and put myself in the place of a CIO sitting in the audience listening to the various speakers, including Docker CEO Ben Golub. Hearing Ben’s high-level comments, the initial value proposition (i.e., the use cases) would be pretty easy for the technical leaders of organizations to understand. But apparently that wasn’t the case. Why? Many explain slow adoption as a function of risk, while others cite reasons like regulatory compliance, but these and other justifications are the result of:

Analogy.



This post was written by Chris Ciborowski

Application Delivery, The Final Frontier: Containers or Platform as a Service?

September 12, 2014 9:44 pm

This is a big question that I have been playing over and over in my head and discussing with colleagues and customers. Based on recent developments, I felt it is certainly worth a few blog posts to completely hash out. It is too much to bite off in one post, so let’s set the stage for a series of posts on the subject.

I have been closely tracking the various PaaS offerings for some time, going back to 2010/11 and the rise of Heroku and Google App Engine. While their offerings were (are) certainly interesting, I came to the conclusion that betting the farm on public PaaS offerings was probably a bad idea. Compared to IaaS, where (depending on technologies) you can fairly simply migrate your data and VMs, or worst case recreate instances from scratch when moving to a new provider or technology, PaaS demands taking the plunge into writing your applications and deployment methodologies to operate with a specific set of procedures. I argue that migrating between PaaS offerings is not cost-effective and certainly not as portable as one may believe. Sure, by adopting early you gain the advantage of reduced time to market, as well as the supporting services you may need to rapidly deploy an application, available as a utility; but the limitations of public offerings were far too great. Not to mention that at scale you could be in for major sticker shock.

Migrating and retooling for on-premises application deployment without a platform is time-consuming and cost-prohibitive. So I gave up on being able to implement comprehensive PaaS solutions that would ease the enterprise into DevOps, looking toward the future to see what additional maturity in the market would bring. Fast forward a few years and there are a plethora of PaaS vendors. With that explosive growth emerged private PaaS solutions like Cloud Foundry and OpenShift, combined with public offerings. The concept of keeping a consistent application delivery and testing loop and methodology across environments, for local (i.e., laptop), public, and private deployments, would be game-changing. Enterprise customers can tightly integrate Development and Operations in a way that allows for scaling, monitoring, and security with the control of an on-premises deployment, yet take advantage of the elasticity of public clouds if necessary, all without doing more than targeting a different execution environment (somewhat oversimplified, but you get the picture). This was a major step forward in the maturity of PaaS and was needed to increase adoption rates.

So, we would think PaaS is the way applications should be delivered. Right? Not so fast. While PaaS technologies were making their way to market, talented folks were developing a different approach to deploying applications using containers. The concept of containerized applications is not new; in fact, we’ve used FreeBSD jails and Solaris Containers for a long time. This time, however, some of the largest internet companies were quietly working on ways to modify and update the concept of containers to efficiently deploy applications in production, at scale, with orchestration. The resulting technologies were their competitive advantage: getting to market quickly and effectively handling massive growth, which legacy technology could not support. What if this tech (or something very similar) made its way to the public domain, say, via open source? Woah. Another game changer. They figured you can drop the need for IaaS, as well as PaaS; instead, write your application, put it and its supporting services into containers, stand up cheap (disposable) Linux servers, install a container execution environment and some management bits, and deploy. Throw in a commercial player, Docker, which provides a complete environment for building, storing, and deploying containers, and you have a supportable solution.
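
That “package it, build it, run it” loop looks roughly like the following minimal sketch, which uses the Docker SDK for Python. It assumes a local Docker daemon and a Dockerfile in ./myapp; the image tag, path, port, and names are examples only, not anything prescribed here.

```python
# Minimal sketch of the container workflow described above, via the Docker SDK
# for Python (pip install docker). Image tag, path, port, and names are examples.
import docker


def build_and_run():
    client = docker.from_env()  # connect to the local Docker daemon

    # "Write your application and put it into a container": build an image from a Dockerfile.
    image, _build_logs = client.images.build(path="./myapp", tag="myapp:latest")

    # "Stand up cheap servers ... and deploy": run the image on any host with a daemon.
    container = client.containers.run(
        image.tags[0],
        detach=True,
        ports={"8080/tcp": 8080},   # expose container port 8080 on the host
        name="myapp",
    )
    print("started container:", container.short_id)


if __name__ == "__main__":
    build_and_run()
```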

Awesome.

We only wish it were that easy! This method takes some pretty serious sysadmin, engineering, and development talent who understand all of the components that make it work cohesively. Deploying on containers adheres to a different principle too: developers are closer to the infrastructure rather than further away, as in the case of PaaS. And you will likely need an orchestration tool like Fig, Mesos, or Kubernetes. This is very new tech, with new or limited commercial support. Not quite production-ready.

So, we have PaaS and we have containers, bringing us to the end of our discussion. The part where I lay out why one is better than the other, which one to choose if you will. Unfortunately, not yet. Let me muddy the water just a little more: both OpenShift and Cloud Foundry are supporting Docker containers as either the primary means or a valid option of application deployment. What? Why build and operate PaaS, with the supporting cast of technologies it requires, if you can just deploy an application as a container and link containers together to create a complete deployment?

That is the million-dollar question. Ready for the short answer? Because. That’s right. Some applications and services are well suited to containerization. Others are far better suited to being run on PaaS. PaaS could be the best fit from the outset, but the business may require a shift to operate as a pure container environment at scale. Conversely, an organization may determine that containers are a low barrier to entry for linked deployments, but later choose the integrated and supported capabilities that PaaS offers, like autoscaling, while still supporting other methods of application deployment. As for the long answer, well, that will be the topic of future blog posts. Stay tuned. In the meantime, check out Nebulaworks solutions based on Cloud Foundry, OpenShift, and Docker. Cheers!



This post was written by Chris Ciborowski