Docker is a partitioning capability within the address space of an operating environment. By allowing the partition (known as a container) to use the host OS directly, even though that OS resides outside of the partition, start-up time is substantially reduced, as are the resource requirements for managing the container (those of you who are familiar with z/OS will find this concept "somewhat familiar").
Financial people love this because the cost of acquiring licenses for the operating environment can be substantially reduced since, theoretically, every component that is not part of the application itself can reside outside of the container. This means only one Windows license needs to be procured versus one per VM (which is the required process if Docker is not used).
The Concept Is Simple, But How Does It Work?
Essentially, a special file (called a Dockerfile) contains one or more instructions on how a container is to be created. The Dockerfile is used as part of a process to generate the container image on the file system, which can contain as little as a single application and its associated binaries. This image (a set of files on disk) is then transferred to the target environment as any set of files would be and is started there using the Docker runtime, which can be invoked via the command-line interface or an API (typically REST-based, but there are other implementations).
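To make this concrete, here is a minimal Dockerfile sketch; the base image, paths, and start-up script are illustrative placeholders, not part of the original article:

```dockerfile
# Start from a small base image (name is illustrative)
FROM alpine:3.19

# Copy only the application and its binaries into the image
COPY ./app /opt/app

# Command the container runs at start-up (hypothetical script)
CMD ["/opt/app/run.sh"]
```

An image built from a file like this (e.g. with `docker build -t myapp .`) can then be started anywhere the Docker runtime is available with `docker run myapp`.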
System Administrators love this because containers are easy to deploy (XCOPY anyone?) and maintain (REST interfaces can be easily integrated into any modern Infrastructure Management platform).
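As a sketch of what that REST integration looks like, the snippet below builds the JSON body an Infrastructure Management platform would send to the Docker Engine API's container-create endpoint. The payload shape follows the Engine API's documented conventions, but field names should be confirmed against the Engine version in use; the function and its defaults are illustrative, not from the article:

```python
import json

def create_container_payload(image, cmd=None, env=None):
    """Build the JSON body for a POST to the Docker Engine API's
    /containers/create endpoint (field names per the Engine API docs;
    verify against your Engine version before relying on them)."""
    payload = {"Image": image}
    if cmd:
        payload["Cmd"] = list(cmd)
    if env:
        # The Engine API expects env vars as a list of "KEY=value" strings
        payload["Env"] = [f"{k}={v}" for k, v in env.items()]
    return json.dumps(payload)

# Example: request a container from a hypothetical nginx image
body = create_container_payload("nginx:alpine", env={"PORT": "8080"})
```

A management platform would POST this body (with `Content-Type: application/json`) to the Engine API and then start the resulting container, all without a human touching the target host.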
The Requirements of an Enterprise-Class Release Automation Solution
Unfortunately, this concept falls down when people try to use it as a substitute for true application release management. More specifically, we can describe application release management using five of the six question words that everyone learned in high school English:
Who: Not just anyone in an organization should be able to deploy an application to an environment. In fact, even for those allowed to do so, there should frequently be others who have to approve the deployment decision.
What: For organizations that truly embrace the concept of business agility, deploying a complete application every time is unacceptable. Artifacts deemed as low risk (e.g. content updates) may be deployed immediately while higher risk artifacts will be queued up to be released after a lot of testing and other validations. Docker has limitations in this category, which will be touched on below.
Where: The target environment of a deployment is frequently different from every other possible target environment that an application will touch during its journey from development to production. These differences are typically addressed by making changes to the configuration of the application after it has been deployed.
When: Release windows are not a new concept. Even in non-production environments, a case for establishing a release window could be made since environments are often shared among multiple teams within the same function or even across functions (i.e. testing and development may use the same environment).
How: Probably the most problematic process to fully integrate into an organization's operational capabilities, the process of deploying an application is far more than simply understanding how to install and configure it. For example, integration with an ITSM application to ensure that change requests have been entered and are in the correct state has to be incorporated into the process of deployment so that the state of the operating environment is always well understood. This is discussed in more detail below.
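The "who" and "when" questions above amount to gates that must pass before any deployment proceeds. The sketch below models that idea; the approver roles, window times, and function name are hypothetical, and any real release-automation tool models these checks in its own way:

```python
from datetime import datetime, time

# Hypothetical governance data: required sign-offs and an agreed window
APPROVERS = {"release_manager", "qa_lead"}
RELEASE_WINDOW = (time(1, 0), time(5, 0))  # 01:00-05:00 maintenance window

def can_deploy(requester: str, approvals: set, now: datetime) -> bool:
    """Gate a deployment on 'who' and 'when': every required approver
    must have signed off, and the clock must fall inside the window."""
    who_ok = bool(requester) and APPROVERS.issubset(approvals)
    start, end = RELEASE_WINDOW
    when_ok = start <= now.time() <= end
    return who_ok and when_ok

# 02:30 inside the window, with both approvals: deployment allowed
ok = can_deploy("alice", {"release_manager", "qa_lead"},
                datetime(2024, 1, 5, 2, 30))
```

The "what", "where", and "how" questions add further gates of the same shape, which is why a purpose-built orchestration layer, rather than the container runtime alone, ends up owning this logic.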
Of the five question words above, Docker only addresses one of them, and not in the most effective manner possible. Consider the scenario of a well-known bank based in Europe. They currently have in excess of a thousand production releases every month. This was accomplished by recognizing that not all production releases are high risk. In the example under What, it was noted that certain types of artifacts had minimal impact. As a result, the release of those artifact types could be expedited, which helped ensure that this bank's customer-facing assets were always meeting the needs of their clientele.
If they were using Docker, however, the entire application would need to be rebuilt regardless of the types of artifacts that were actually approved for production release. The risk that unapproved binaries could be released into production is simply unacceptable for most companies. And this addresses only one of the five items above - Docker does nothing for the other four.
Application Release Management Is More Than the Application
It is tempting to think of application release management in terms of the application only, while forgetting that the application, from the business's perspective, is part of a bigger picture. In the How section above, ITSM was mentioned, but this is not the only technology with which the release process must integrate. In fact, the SDLC toolchain is littered with a whole host of solutions that fit specific needs: Hudson and Jenkins for Continuous Integration; Git and Subversion for Source Code Management; Nexus and Artifactory for Artifact Management; Chef and Puppet for Configuration Management; etc.
Additionally, the process of releasing an application during its entire lifetime often includes governance that is specific to the process but isn't part of the process, per se. However, these stages through which the build must traverse are essential to ensuring minimal risk while releasing with a high cadence. They include approvals, validations, and other types of activities.
Automation Is the Key to Everything
Everything we've spoken about is critical to an application release, but, in the end, results are what matter. End users need new functionality, and the speed at which the application development team can both produce the new functionality and deliver it to the end users determines how quickly that new functionality will translate to additional revenue.
Furthermore, repeatability in the process ensures a much higher rate of application deployment success. Conversely, failed deployments cost your company money while production instances of your application are down during triage and remediation. Two studies by major analyst firms in the past 3 years determined that the cost amongst Fortune 1000 companies for application outages that were due to change, configuration, or other handoff-related issues was in the range of $200k-400k per hour.
Each of the tools in the previous section has relevance within only a small portion of the application build and release process. Similarly, Docker addresses the management of the artifacts associated with application release in such a way that it eases the deployment of those artifacts, but that's it. The coordination of these and other solutions' capabilities is something that must be managed by an orchestration solution, specifically one that was purpose-built for application release automation.
Summary
To summarize, Docker is an exciting technology that should be viewed as simply another mechanism that exists within the greater whole of the application release cycle. But it should not be viewed as a replacement for a well-defined methodology that not only includes the "what," but also includes "who," "where," "when," and "how."
Investing in an enterprise-class automation solution built specifically to automate the release of your mission-critical applications will not only increase the speed with which you deploy your applications but will also increase the rate of your company's digital transformation as more applications are deployed using the solution, providing dividends for years to come.
This article is featured in the new DZone Guide to Orchestrating and Deploying Containers. Get your free copy for more insightful articles, industry statistics, and more!