Recapturing $2.6 Trillion in IT waste with DevOps

May 10, 2016 11:13 am

Wasteful spending is all around us: in federal, state, and local government, in companies of all sizes, and likely even in your own household. Come to think of it, why am I paying for that landline again? So the question is: when evaluating IT spend, do you consider new approaches (like DevOps) to help reduce waste?

Wasteful behavior isn’t usually something anyone sets out to exhibit. Rather, it is the result of maintaining the status quo. Contracts are negotiated, teams are built, technologies are deployed. Then we move on to the next challenge. Over time we standardize and create processes in an attempt to keep costs in check and control waste. Unfortunately, we are often measuring success against historical data, like industry standards for year-over-year discounting. So we press on, spending on items we believe are required, albeit at a reduced cost. Success, you say? We’ll get back to that.

In 2014 Gene Kim (co-author of The Phoenix Project) asserted that there is roughly $2.6 trillion in opportunity cost directly related to wasted IT spend. Let’s think about that. By continuing down “Status Quo Avenue”, we are collectively missing out on $2.6 trillion in opportunities: expanding into new markets, garnering additional market share, faster feature release cycles, or increased operational efficiency. Assuming that number is reasonably accurate, how does one go about identifying previously overlooked or endemic waste and turning it into new opportunities? More specifically, how can DevOps be used as a mechanism to accomplish this, or to catalyze our efforts?



This post was written by Chris Ciborowski

What’s causing slow container and DevOps adoption?

March 8, 2016 8:53 pm

Analogical Reasoning and DevOps

A couple of weeks ago, CIO.com writer Clint Boulton posted an article titled “CIOs aren’t ready for Docker and Container technology”, in which he discusses why CIOs are slow to adopt containers and DevOps. It is a pretty good read, providing more substance than other pieces covering emerging technology. In fact, it was largely supported by data gathered live from 80 CIOs in attendance at a Wall Street Journal event. As an empirical-data guy, I consider that a good thing. But I kept taking issue with the “CIOs aren’t ready” part of the title. At Nebulaworks we work on DevOps enablement projects daily, and containers are a key piece of the puzzle for driving pipeline productivity. So why are CIOs slow to embrace the technology and adopt DevOps?

I went back and reread the article, putting myself in the place of a CIO sitting in the audience listening to the various speakers, including Docker CEO Ben Golub. Hearing Ben’s high-level comments, the initial value proposition (i.e., the use cases) should have been pretty easy for the technical leaders of organizations to understand. But apparently that wasn’t the case. Why? Many explain slow adoption as a function of risk, while others cite reasons like regulatory compliance, but these and other justifications are the result of:

Analogy.



This post was written by Chris Ciborowski

Application Delivery, The Final Frontier: Containers or Platform as a Service?

September 12, 2014 9:44 pm

This is a big question that I have been playing over and over in my head and discussing with colleagues and customers. Based on recent developments, I felt it was certainly worth a few blog posts to hash out completely. It is too much to bite off in one post, so let’s set the stage for a series of posts on the subject.

I have been closely tracking the various PaaS offerings for some time, going back to 2010/11 with the release of Heroku and Google App Engine. While their offerings were (and are) certainly interesting, I came to the conclusion that betting the farm on public PaaS offerings was probably a bad idea. Compared to IaaS, where (depending on the technologies) you can fairly simply migrate your data and VMs, or worst case recreate instances from scratch when moving to a new provider, PaaS demands taking the plunge into writing your applications and deployment methodologies around a specific set of procedures. I argue that migrating between PaaS offerings is not cost effective, and certainly not as portable as one may believe. Sure, by adopting early you gain reduced time to market, along with supporting services available as a utility to rapidly deploy an application; but the limitations of public offerings were far too great. Not to mention that at scale you could be in for major sticker shock.

Migrating and retooling for on-premise application deployment without a platform is time consuming and cost prohibitive. So I gave up on being able to implement comprehensive PaaS solutions that would ease the enterprise into DevOps, and looked toward the future to see what additional maturity in the market would bring.

Fast forward a few years and there are a plethora of PaaS vendors. With that explosive growth emerged private PaaS solutions like Cloud Foundry and OpenShift, which can be combined with public offerings. The concept of keeping a consistent application delivery and testing loop across environments, for local (i.e., laptop), public, and private deployments, would be game changing. Enterprise customers can tightly integrate Development and Operations in a way that allows for scaling, monitoring, and security with the control of an on-premise deployment, yet take advantage of the elasticity of public clouds if necessary, all without doing more than targeting a different execution environment (somewhat oversimplified, but you get the picture). This was a major step forward in the maturity of PaaS and was needed to increase adoption rates.
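To make “targeting a different execution environment” a bit more concrete, here is a minimal sketch of the Cloud Foundry flavor of that workflow. The application name, memory setting, and API endpoints are illustrative assumptions on my part, not a recommended configuration:

```
# manifest.yml for a hypothetical app
---
applications:
- name: demo-api
  memory: 256M
  instances: 2
  path: .
```

```
# The same app and manifest can target a public or a private Cloud Foundry
# simply by pointing the CLI at a different API endpoint (example endpoints)
cf api https://api.run.pivotal.io
cf push demo-api

cf api https://api.cf.internal.example.com
cf push demo-api
```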

So, we would think PaaS is the way applications should be delivered. Right? Not so fast. While PaaS technologies were making their way to market, talented folks were developing a different approach to deploying applications using containers. The concept of containerized applications is not new; we have used FreeBSD jails and Solaris Containers for a long time. This time, however, some of the largest internet companies were quietly working on ways to modify and update the concept of containers to efficiently deploy applications in production, at scale, with orchestration. The resulting technologies were their competitive advantage, getting them to market quickly and effectively handling massive growth that legacy technology could not support. What if this tech (or something very similar) made its way into the public domain, say, via open source? Whoa. Another game changer.

They figured you can drop the need for IaaS, as well as PaaS. Instead: write your application, put it and its supporting services into containers, stand up cheap (disposable) Linux servers, install a container execution environment and some management bits, and deploy. Throw in a commercial player, Docker, which provides a complete environment for building, storing, and deploying containers, and you have a supportable solution.

Awesome.
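To sketch what that deployment model looks like in practice (the application, image name, and port below are hypothetical, purely for illustration), the deployable artifact is described by a short Dockerfile and shipped with a couple of CLI commands:

```
# Dockerfile for a hypothetical Python web app
FROM python:2.7
# Copy the application source into the image and install its dependencies
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
# Document the listening port and define the container's process
EXPOSE 8000
CMD ["python", "app.py"]
```

```
# Build the image, then run it on any disposable Linux host with the Docker engine
docker build -t myorg/webapp:1.0 .
docker run -d --name webapp -p 8000:8000 myorg/webapp:1.0
```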

We only wish it were that easy! This method takes some pretty serious sysadmin, engineering, and development talent: people who understand all of the components that make this work cohesively. Deploying on containers also adheres to a different principle: developers are closer to the infrastructure rather than further away, as in the case of PaaS. And you will likely need an orchestration tool like Fig, Mesos, or Kubernetes. This is very new tech with little commercial support; it is not quite production ready.

So, we have PaaS and we have containers, bringing us to the end of our discussion: the part where I lay out why one is better than the other, which one to choose if you will. Unfortunately, not yet. Let me muddy the water just a little more: both OpenShift and Cloud Foundry are supporting Docker containers, either as the primary means of application deployment or as a valid option. What? Why build and operate a PaaS, with the supporting cast of technologies it requires, if you can just deploy an application as a container and link containers together to create a complete deployment?
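For a sense of what “linking containers together to create a complete deployment” looked like with Fig at the time, here is a minimal sketch; the service names and images are hypothetical:

```
# fig.yml describing a hypothetical two-container deployment
web:
  build: .            # build the application image from a local Dockerfile
  ports:
    - "8000:8000"
  links:
    - db              # wire the web container to the database container
db:
  image: postgres:9.3
```

Running "fig up -d" builds the images, starts both containers, and sets up the link between them.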

That last question, why build and operate a PaaS at all, is the million-dollar question. Ready for the short answer? Because. That’s right. Some applications and services are well suited to containerization. Others are far better suited to running on a PaaS. PaaS could be the best fit from the outset, but the business may later require a shift to a pure container environment at scale. Conversely, an organization may determine that containers are a low barrier to entry for a linked deployment, but later choose the integrated and supported capabilities that PaaS offers, like autoscaling, while still supporting other methods of application deployment.

As for the long answer... well, that will be the topic of future blog posts. Stay tuned. In the meantime, check out Nebulaworks solutions based on Cloud Foundry, OpenShift, and Docker. Cheers!



This post was written by Chris Ciborowski