Every development team has its own standards for deploying applications, and each comes with challenges and trade-offs. No deployment solution is perfect, but developers do everything possible to make the process easier, more seamless, and more effective for the business. One main issue with standard deployment is the team’s response to critical, high-priority bugs. The delay in deploying bug fixes often comes from traditional habits ingrained in most developers, but the introduction of containers relieves many of the headaches of deployment and provides a faster way to respond to revenue-impacting software failures.
Traditional Software Deployments
Before you can appreciate Docker’s benefits, it’s important to understand traditional deployment procedures. Although most teams have their own custom way of doing things, deployment looks similar across most organizations.
When a bug report is documented, a project manager reviews the issue with the stakeholder and relays the information to the developer. The PM and developer review the issue, and the developer has to find the core problem (also called root cause analysis). Bugs are prioritized, with revenue-impacting ones receiving the most attention; non-intrusive bugs are pushed aside until the high-priority ones are fixed. This exposes the first issue with traditional deployment: non-intrusive, low-priority bugs often get pushed so far back in the schedule that they are overlooked indefinitely.
Once the source code responsible for the bug is found, the developer can work with the team to figure out a solution. The solution is implemented into the code base, and most teams have a specific day that code is deployed to production. Some teams only deploy once a month, and others deploy once a week. A QA person is there to ensure that the deployment passes a variety of scripted and manual tests, and the result is patched software that should never interrupt user productivity.
This is a simplified version of deploying software to production, but it captures the main activities that happen behind the scenes. You can probably already see some issues, especially in terms of speed and coordination among developers.
To name just a few issues with traditional deployments:
- Even high-priority, revenue-impacting bugs can take longer to fix than they should. Just a day between finding a bug and deploying the fix to production can cost the organization millions, depending on the size of the business.
- Some teams script deployments, but most do not. This results in human error when the person in charge of deployments forgets a step or misses a configuration.
- Keeping track of what in your codebase should and shouldn’t be deployed can cause havoc during an emergency deployment.
- A large code base with several applications combined can be difficult to verify and compile when developers are rushed to deploy emergency bug fixes.
The issues with deployment sometimes lead to poor business practices. Developers working directly in production, changing data in a production database on the fly, and skipping the QA process to speed up deployment are signs that your deployment strategy isn’t working.
Working with Docker to Simplify Deployment
Docker is a container solution that redefines the way you deploy applications. It also redefines the way you store and run them.
In the previous section, we described a deployment procedure implemented in many large enterprise organizations. The codebase is usually large, and teams have a dedicated person to compile it and keep track of code changes. Containerization takes a different approach and separates applications into components. Containers don’t host a UI that interacts with the operating system; instead, Docker containers run APIs, services, websites, and other backend components in their own “space” on a server.
Containers differ from virtual machines because they run directly on the operating system. As you know from VMs, a hypervisor lets you run any operating system on a local computer regardless of the underlying OS. With containers, the application runs on the particular OS installed on the server. One way around this limitation is to run your containers on their own VPS, which can run whatever operating system your applications need.
Re-engineering large codebases into smaller components removes many of the technical errors and hurdles of traditional deployment. Take, for instance, the disorganization of a large codebase when a high-priority bug is reported. The codebase must be compiled, and each component, including the UI, any APIs, and backend database queries, must be re-tested by QA before deployment. When a revenue-impacting bug is found, time is of the essence. Containers eliminate the extra time spent testing every component, because each container is independent of the others.
Instead of a large codebase, you deploy a portion of your application as a Docker container image. For instance, an API would have its own container. If a bug is reported in another part of your system, it doesn’t affect the API; any bugs in the API affect only the API, so testing and deployment cover just that limited codebase. The result is more rapid deployment of new features, bug fixes, and tests. Your development team can also deploy more often, since every change is scripted and contained within its own Docker container.
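To sketch the idea, a standalone API container could be described by a minimal Dockerfile of its own; the image tag and directory names below are hypothetical:

```dockerfile
# Hypothetical Dockerfile for an API that lives in its own container.
# A fix to the API is built, tested, and deployed here without
# touching any other container in the platform.
FROM microsoft/aspnet:windowsservercore-10.0.14393.693
COPY ApiCodeDirectory /Api-App
EXPOSE 80
```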
Containerization doesn’t mean your applications work in silos. They can still communicate with each other to build platform-wide solutions without the large, bulky codebase. The big advantage is that containers are built once and then deployed many times. No matter how your code changes, the container is built the same way every time, which lets you script and deploy a container across a variety of environments – testing, staging, production, etc.
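As a rough sketch of that build-once, deploy-many flow, a deployment script might look like the following. The image name, environment names, and ports are hypothetical, and the actual docker build/run calls are left as comments since they require a Docker host:

```shell
#!/bin/sh
# Build the image once; the identical artifact then goes to every
# environment. (Docker commands are commented out so the sketch runs
# without a Docker host.)
#   docker build -t sample-app:1.0 .

IMAGE="sample-app:1.0"

deploy() {
  env_name="$1"
  port="$2"
  # In a real pipeline this would start the container:
  #   docker run -d --name "sample-app-$env_name" -p "$port:80" "$IMAGE"
  echo "deploying $IMAGE to $env_name on port $port"
}

# Same image, no rebuilds, three environments.
deploy testing 8080
deploy staging 8081
deploy production 80
```

Because nothing about the image changes between environments, anything that passes in testing is, byte for byte, what reaches production.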
Docker and .NET Applications
Because the UI cannot communicate with the underlying operating system, you can’t deploy desktop applications, but Docker supports any .NET website, API, or service that you want to run in production. Deploying .NET is a bit different from other platforms because Windows environments use PowerShell, Microsoft’s scripting language for operations.
When you work with Docker, you package an application using a file called a Dockerfile. Anyone in DevOps who is used to scripting configurations and deployment files will pick up Dockerfiles quickly. A Dockerfile is a script that copies your .NET files and sets configurations for your container. If you’re only used to MSI files and installer GUIs, it might take some time to adjust to a purely scripted deployment process. Any application that runs on Windows and doesn’t need a UI to work with the operating system will work in a Windows container, including Java, .NET Core, Node.js, and Go apps.
The following is an example of a Dockerfile:
FROM microsoft/aspnet:windowsservercore-10.0.14393.693
SHELL ["powershell"]
RUN Remove-Website -Name 'Default Web Site'; \
    New-Item -Path 'C:\myWindowsApi' -Type Directory; \
    New-Website -Name 'Sample-App' -PhysicalPath 'C:\myWindowsApi' -Port 80 -Force
EXPOSE 80
RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' \
    -Name ServerPriorityTimeLimit -Value 0 -Type DWord
COPY CodeDirectory /Sample-App
- Starts Docker with a Windows Server Core image
- Tells Docker to use PowerShell as the shell program
- Removes the default website installed with the Windows image and creates a new IIS site named 'Sample-App'
- Opens port 80 (the default webserver port)
- Disables DNS caching in the container, so DNS changes are picked up immediately
- Copies the deployment directory (CodeDirectory) to the Sample-App application directory
The above script is simple and only defines the image for one main website container, but code changes to the web app won’t change the way the container is deployed.
After you create the Dockerfile, you run the “docker build” command to build the Docker image, which can then run in your environment. This example excludes containers for dependencies such as a database server, but you can create a similar script that sets up a SQL Server database in another container to serve the website’s backend data.
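As a sketch of that idea, the database dependency could be declared next to the website in a Compose file; the service names, image tags, and password below are hypothetical placeholders:

```yaml
# Hypothetical docker-compose.yml pairing the website container with a
# SQL Server container for its backend data.
version: "3"
services:
  web:
    image: sample-app:1.0              # the website image built earlier
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: microsoft/mssql-server-windows-express
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "ChangeMe123!"      # placeholder only
```

A single `docker-compose up` then starts both containers together, while each one can still be rebuilt and redeployed independently.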
For organizations with several software products, redesigning your solutions into Docker containers eliminates much of the deployment hassle of working with VPS or dedicated servers. Docker containers will:
- Reduce deployment times by using the same image across all deployments
- Eliminate the “runs on my machine” bugs that happen when software moves from a local dev machine to staging or production
- Script configurations specific to the application instead of a physical server or VPS image
- Let you deploy rapidly on demand, since deployments are scripted, rather than waiting for a standard release window
- Integrate easily in a Microsoft Azure cloud hosting environment
- Isolate applications, so any bugs affect only the local container and not the entire application