September 18, 2014 1:00 pm
Wow. Talk about timing. Sometimes, when things line up it is almost like there was divine intervention. Well, that is if you believe in divinity. And in the technology world there is not too much akin to there being a god…unless you are Larry Ellison and you believe yourself to be divine. I digress.
Let’s talk about why I am excited first. There are many reasons, chief among them that I have been there. No, not from the angle of running an organization which raised some serious VC (congrats to the Docker team, maybe me someday), but rather from the perspective of a sysadmin and team lead facing the challenges which Docker is aiming to solve. Just thinking of the time I spent preparing for application upgrades, being “on call” (another phrase really meaning you better get your ass on the bridge) to support the developers and DBAs as new rollouts were going live – recounting the countless hours under fluorescent lights freezing my ass off (yes, I had to be in the data center), and walking into a datacenter at dusk and out a day later at noon, not knowing exactly what time it was – makes me cringe. OK, it makes me cry. What if there was a better way to do what countless organizations have done since what seems like the beginning of time? There must be a better way.
Fast forward a decade, maybe a little longer. Over the last ten or so years we have cobbled together ways to make the deployment of applications better. Still not the optimal way of doing things, but better. Figuring out how to package up apps from the development environment, troubleshooting in test, and hoping that rollout into production is seamless – and that our scripts made sure the operating environment and supporting infrastructure were in fact ready for rollout. Maybe implement SOA (along with all of its standards). And of course I’ve always been a fan of automation. Which sysadmin isn’t? Spend the time to think about all of the various outcomes and develop the logic to handle the situations. Thank you Chef, Puppet, Salt, etc. for making that easier. But it still doesn’t make it dead simple. Like: develop; push to test; test; push to production. Not needing to worry (so much) about all of the underlying requirements and the differences between environments. With Platform as a Service we are getting there, and we at Nebulaworks have jumped in feet first to help organizations make sense of the concepts. We think it is going to be big. But what about a less complex alternative? Maybe even a tool or process that allows for a phased approach. Time to circle back to Docker.
Take the functionality available in a Linux kernel, add a library, and run multiple isolated applications. OK, been there, done that (see my previous blog post). But where this gets cool is providing a repository for the applications and services. Build your container including your app, add required services to support the bits, and deploy. That is the a-ha moment. Don’t want all of that in a container? Cool…create or pull down a different container with the supporting service and link them together. Let’s do one better – use a layered file system so that we can easily track changes between iterations of that container. Whaaatt? An iterative approach that is visible not only to the developers but also to the sysadmins? And add to that – all we need to set up dev, test, and prod is to install Linux or spin up a VM/instance and manage the underlying capacity? Fantastic. And there are tangible benefits to the organization besides the cool factor: easier manageability, greater density of workloads, faster deployments. Oh, and we haven’t even discussed solving for portability, which Docker addresses as well.
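To make that concrete, here is a rough sketch of the workflow in 2014-era Docker commands. The image names, the Dockerfile contents, and the Postgres service are hypothetical, illustrative choices, not something from an actual deployment:

```shell
# Write a minimal Dockerfile for a hypothetical app. Each instruction
# becomes one layer in the image's layered filesystem, which is what
# makes changes between container iterations visible.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
EOF
echo 'print("hello from a container")' > app.py

# Build, run a supporting service in its own container, and link the
# two (guarded so the sketch degrades gracefully without a daemon).
if command -v docker >/dev/null 2>&1; then
  docker build -t myorg/myapp:1.0 .            # build the app image
  docker run -d --name db postgres             # separate service container
  docker run -d --link db:db myorg/myapp:1.0   # 2014-era container linking
  docker history myorg/myapp:1.0               # inspect the image layer by layer
fi
```

`docker history` is the part that delivers the “visible iterations” point: every change to the Dockerfile shows up as a distinct layer that both developers and sysadmins can inspect.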
So about that perfect match. Nebulaworks was founded on bringing creative, cost-effective cloud solutions to organizations to help them gain agility and innovate. We are engineers with a passion for doing this with our customers. Docker is looking to let developers and sysadmins build, ship, and run any app, anywhere. I believe this allows organizations to address challenges creatively, and it certainly has the benefit of being cost effective. And it also enables agility. Yes, that is just about a perfect match.
News about our Docker partnership: Nebulaworks Signs as a Docker System Integrator Partner. And for more information about container solutions using Docker and other technologies, visit Nebulaworks Container Platforms.
Until next time.
Tags: Containers, Docker, News, Partnerships
Categorised in: News and Events
This post was written by Chris Ciborowski
September 12, 2014 9:44 pm
This is a big question that I have been playing over and over in my head, discussing with colleagues and customers. Based on recent developments I felt it was certainly worth a few blog posts to completely hash out. Certainly too much to bite off in one post – so let’s set the stage for a series of posts on the subject.
I have been closely tracking the various PaaS offerings for some time, going back to 2010/11 with the release of Heroku and Google App Engine. While their offerings were (are) certainly interesting, I came to the conclusion that betting the farm on public PaaS offerings was probably a bad idea. Compared to IaaS, where (depending on technologies) you can fairly simply migrate your data and VMs, or worst-case recreate instances from scratch when moving to a new provider or technology, PaaS demands taking the plunge into writing your applications and deployment methodologies to operate with a specific set of procedures. I argue that migrating between PaaS offerings is not cost effective, and certainly not as portable as one may believe. Sure, by adopting early you gain the advantage of reduced time to market, plus the supporting services you may need to rapidly deploy an application, available as a utility; but the limitations of public offerings were far too great. Not to mention that at scale you could be in for major sticker shock.
Migrating and retooling for on-premise application deployment without a platform is time consuming and cost prohibitive. So I gave up on being able to implement comprehensive PaaS solutions that would ease the enterprise into DevOps…looking towards the future to see what additional maturity in the market would bring. Fast forward a few years and there are a plethora of PaaS vendors. And with that explosive growth emerged private PaaS solutions like Cloud Foundry and OpenShift – combined with public offerings. The concept of keeping a consistent application delivery and testing loop and methodology across environments – local (i.e. laptop), public, and private deployments – would be game changing. Enterprise customers can tightly integrate Development and Operations in a way which allows for scaling, monitoring, and security with the control of an on-premise deployment, but take advantage of the elasticity of public clouds if necessary – all without doing more than targeting a different execution environment (kind of oversimplified, but you get the picture). This was a major step forward in the maturity of PaaS, and needed to increase adoption rates.
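That “retarget, don’t rewrite” idea looks roughly like this in Cloud Foundry CLI terms. The private API endpoint and app name are hypothetical; the sketch is written to a script rather than executed, since it would need live Cloud Foundry targets and credentials:

```shell
# Sketch: the same app pushed, unchanged, to two execution environments.
cat > push-both.sh <<'EOF'
#!/bin/sh
set -e
cf api https://api.cf.internal.example.com   # private, on-premise target
cf push myapp
cf api https://api.run.pivotal.io            # public target of the era
cf push myapp                                # same artifact, different environment
EOF
chmod +x push-both.sh
```

The point is that nothing about the application changes between the two pushes; only the target does.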
So, we would think PaaS is the way that applications should be delivered. Right? Not so fast. While PaaS technologies were making their way to market, there were talented folks developing a different approach to deploying applications using containers. The concept of containerized applications is not new. In fact, we’ve used FreeBSD jails and Solaris Containers for a long time. This time, however, some of the largest internet companies were quietly working on ways to modify and update the concept of containers to efficiently deploy applications in production, at scale, with orchestration. The resulting technologies were their competitive advantage: getting to market quickly and effectively handling massive growth – which legacy technology could not support. What if this tech (or something very similar) made its way to the public domain, say, via open source? Woah. Another game changer. They figured that you can drop the need for IaaS, as well as PaaS. Instead: write your application, put it and its supporting services into containers, stand up cheap (disposable) Linux servers, install a container execution environment and some management bits, and deploy. Throw in a commercial player, Docker, which provides a complete environment for building, storing, and deploying containers, and you have a supportable solution.
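The “cheap disposable servers plus an execution environment” flow boils down to a bootstrap script like the following. The image name is hypothetical; get.docker.com was Docker’s own installer script of the period. It is written to a file rather than run, since it needs root and a real host:

```shell
# Bootstrap for a disposable Linux host.
cat > bootstrap.sh <<'EOF'
#!/bin/sh
set -e
# 1. Install the container execution environment.
curl -sSL https://get.docker.com/ | sh
# 2. Pull and run the application container (hypothetical image).
docker run -d -p 80:5000 myorg/myapp:1.0
EOF
chmod +x bootstrap.sh
```

If the host dies, you don’t repair it; you run the same two steps on a fresh one. That is what makes the servers disposable.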
We only wish it was that easy! This method takes some pretty serious sysadmin, engineering, and development talent – people who understand all of the components which make this work cohesively. Deploying on containers adheres to a different principle too: developers are closer to the infrastructure, rather than further away as in the case of PaaS. And you will likely need an orchestration tool like Fig, Mesos, or Kubernetes. This is very new tech, with little commercial support. Not quite production ready. So, we have PaaS and we have containers, bringing us to the end of our discussion. The part where I lay out why one is better than the other – which one to choose, if you will. Unfortunately, not yet. Let me muddy the water just a little more: both OpenShift and Cloud Foundry are supporting Docker containers, either as the primary means or as a valid option of application deployment. What? Why build and operate PaaS, with the supporting cast of technologies which are required, if you can just deploy an application as a container and link containers together to create a complete deployment?
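For a sense of what “linking containers together to create a complete deployment” looked like with an orchestration tool, here is a minimal Fig configuration (a `fig.yml`); the service names and images are hypothetical:

```yaml
# fig.yml – two linked services; Fig was the predecessor of Docker Compose.
web:
  build: .          # build the app image from a local Dockerfile
  ports:
    - "8000:8000"
  links:
    - db            # wires the db container into web's environment
db:
  image: postgres
```

One `fig up` and both containers start, linked – no platform underneath, just the Docker daemon.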
That is the million dollar question. Ready for the short answer? Because. That’s right. Some applications and services are well suited to containerization. Others are far better suited to being run on PaaS. PaaS could be the best fit from the outset, but the business may require a shift to operate as a pure container environment at scale. Conversely, an organization may determine that containers are a low barrier to entry for a linked deployment, but later may choose to have the integrated and supported capabilities that PaaS offers, like autoscaling, while being able to support other methods of application deployment. As for the long answer…well, that will be the topic of future blog posts. Stay tuned. In the meantime, check out Nebulaworks solutions based on Cloud Foundry, OpenShift, and Docker. Cheers!
Tags: Application Development, Cloud Computing, Cloud Foundry, Containers, DevOps, Docker, IaaS, OpenShift, PaaS
Categorised in: DevOps
This post was written by Chris Ciborowski