This is a big question that I have been playing over and over in my head and discussing with colleagues and customers. Based on recent developments, I felt it was certainly worth a few blog posts to hash out completely. It is certainly too much to bite off in one post – so let’s set the stage for a series of posts on the subject.
I have been closely tracking the various PaaS offerings for some time, going back to 2010/11 as Heroku and Google App Engine were gaining traction. While their offerings were (are) certainly interesting, I came to the conclusion that betting the farm on public PaaS offerings was probably a bad idea. Compared to IaaS, where (depending on the technologies) you can fairly simply migrate your data and VMs, or worst case recreate instances from scratch when moving to a new provider or technology, PaaS demands taking the plunge into writing your applications and deployment methodologies around a specific set of procedures. I argue that migrating between PaaS offerings is not cost effective and certainly not as portable as one may believe. Sure, by adopting early you gain the advantage of reduced time to market, along with the supporting services you may need to rapidly deploy an application, delivered as a utility. But the limitations of the public offerings were far too great. Not to mention that at scale you could be in for major sticker shock.
Migrating and retooling for on-premises application deployment without a platform is time consuming and cost prohibitive. So I gave up on being able to implement comprehensive PaaS solutions that would ease the enterprise into DevOps, looking toward the future to see what additional maturity in the market would bring. Fast forward a few years and there is a plethora of PaaS vendors. With that explosive growth emerged private PaaS solutions like Cloud Foundry and OpenShift, complementing the public offerings. The concept of keeping a consistent application delivery and testing loop and methodology across environments – local (i.e. laptop), public, and private deployments – would be game changing. Enterprise customers can tightly integrate Development and Operations in a way that allows for scaling, monitoring, and security with the control of an on-premises deployment, yet take advantage of the elasticity of public clouds if necessary – all without doing much more than targeting a different execution environment (a bit oversimplified, but you get the picture). This was a major step forward in the maturity of PaaS, and one needed to increase adoption rates.
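To make the "target a different execution environment" idea concrete, here is a minimal sketch of a Cloud Foundry application manifest. The app name, memory, and instance counts are hypothetical examples; the point is that the same manifest and the same push workflow can be aimed at a private installation or a public provider.

```yaml
# manifest.yml – a sketch, with hypothetical values.
# The same file works whether the CLI is targeting an
# on-premises Cloud Foundry or a public one; only the
# API endpoint you target changes.
applications:
- name: my-app
  memory: 512M
  instances: 2
```

In practice the developer workflow stays constant: retarget the CLI at a different API endpoint and push the same application, which is exactly the consistency across laptop, private, and public environments described above.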
So, we would think PaaS is the way applications should be delivered. Right? Not so fast. While PaaS technologies were making their way to market, talented folks were developing a different approach to deploying applications using containers. The concept of containerized applications is not new; we’ve used FreeBSD jails and Solaris Containers for a long time. This time, however, some of the largest internet companies were quietly working on ways to modify and update the concept of containers to efficiently deploy applications in production, at scale, with orchestration. The resulting technologies were their competitive advantage: getting to market quickly and effectively handling massive growth, which legacy technology could not support. What if this tech (or something very similar) made its way to the public domain, say, via open source? Woah. Another game changer. The idea: drop the need for IaaS as well as PaaS. Instead, write your application, put it and its supporting services into containers, stand up cheap (disposable) Linux servers, install a container execution environment and some management bits, and deploy. Throw in a commercial player, Docker, which provides a complete environment for building, storing, and deploying containers, and you have a supportable solution.
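As a sketch of the "put the application into a container" step, here is a minimal Dockerfile. The base image, package, and application file are hypothetical examples, not a recommendation:

```dockerfile
# Sketch of containerizing an app; names below are hypothetical.
FROM ubuntu:14.04

# Install the runtime the application needs.
RUN apt-get update && apt-get install -y python

# Copy the application into the image and declare how to run it.
ADD app.py /opt/app/app.py
EXPOSE 8000
CMD ["python", "/opt/app/app.py"]
```

Build it once with `docker build`, then run the resulting image on any of those cheap, disposable Linux servers that have the container execution environment installed.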
We only wish it were that easy! This method takes some pretty serious sysadmin, engineering, and development talent – people who understand all of the components that make this work cohesively. Deploying on containers adheres to a different principle, too: developers are closer to the infrastructure, rather than further away as in the case of PaaS. And you will likely need an orchestration tool like Fig, Mesos, or Kubernetes. This is very new tech with little or no commercial support – not quite production ready. So, we have PaaS and we have containers, bringing us to the end of our discussion. The part where I lay out why one is better than the other – which one to choose, if you will. Unfortunately, not yet. Let me muddy the water just a little more: both OpenShift and Cloud Foundry are supporting Docker containers, either as the primary means of application deployment or as a valid option. What? Why build and operate PaaS, with the supporting cast of technologies it requires, if you can just deploy an application as a container and link containers together to create a complete deployment?
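For the "link containers together" piece, a minimal fig.yml (Fig being one of the orchestration tools mentioned above, and the predecessor to Docker Compose) might look like this; the service names, image, and ports are hypothetical examples:

```yaml
# Sketch of a fig.yml linking a web container to a database container.
# Names, image, and ports are hypothetical.
web:
  build: .        # built from the Dockerfile in the current directory
  ports:
    - "8000:8000"
  links:
    - db          # wires the web container to the db container
db:
  image: postgres
```

A single `fig up` then starts both containers and links them – a complete deployment from one file, with no PaaS in sight. Which is exactly what makes the question above so pointed.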
That is the million dollar question. Ready for the short answer? Because. That’s right. Some applications and services are well suited to containerization. Others are far better suited to being run on PaaS. PaaS could be the best fit from the outset, but the business may require a shift to operate as a pure container environment at scale. Conversely, an organization may determine that containers are a low barrier to entry for a linked deployment, but later choose the integrated and supported capabilities that PaaS offers, like autoscaling, while still supporting other methods of application deployment. As for the long answer…well, that will be the topic of future blog posts. Stay tuned. In the meantime, check out Nebulaworks solutions based on Cloud Foundry, OpenShift, and Docker. Cheers!