Docker Brings Scale at Cost


Our UDig Software Engineering team has always been a cloud-first engineering group, but while inventorying our internal applications and cloud usage, we were disappointed by our spend and our over-provisioning of EC2 instances on AWS. From research and experience we knew that if we could containerize our applications with Docker, we would be able to consolidate servers and get far more out of each one. Before we go any deeper, let's look at why this is the case.

First off, you've likely heard from teams leveraging Docker that it is much faster than virtual machines (VMs) or other setups. While that may be true for some applications, what I think users are really noticing is Docker's efficiency and how it lets applications use more of the horsepower their servers already have. One key reason: unlike a virtual machine, which is carved out of the hardware with fixed CPU and memory allocations, Docker does not need those limits defined up front. Containers that need the most horsepower can draw more of it when they need it, while containers that are idle or between runs release resources and consume little to nothing on the server. This fundamental shift in how virtualization works is one of the many benefits of Docker and containerized applications.
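
To make that concrete, here is a minimal sketch using the Docker CLI (the image names are hypothetical): with no flags, a container simply shares whatever the host has free, and optional flags cap a container that should not crowd out its neighbors.

    # By default a container may use as much CPU and memory as the host has
    # available, and an idle container consumes close to nothing.
    docker run -d --name reports-batch myorg/reports:latest

    # Optional caps keep a noisy workload from starving the rest:
    # at most two CPU cores and 1 GiB of memory for this container.
    docker run -d --name api --cpus="2.0" --memory="1g" myorg/api:latest

The exact flags and values depend on your workloads; the point is that limits are opt-in rather than carved out of the hardware up front.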

So back to our UDig story and our journey to containerize all of our internal applications, services and batch jobs with Docker. The first part of our process was to catalog all of our applications and capabilities and confirm that Docker was a viable home for them to live and run. At the end of this exercise we had a list of applications mapped with any call-outs that were of concern or required further investigation. Fortunately for us, all of our applications could make the transition; only a couple needed adjustments to run well inside containers. With this information in hand we quickly mapped out an approach and began setting up container repositories and CI/CD pipeline builds, then deploying our applications to their new home. Through this effort we took ten applications and consolidated them from six instances down to a single instance. The new instance was provisioned with more horsepower, yet it cost less than half of what we had been spending to run the multiple servers. All in all, our total AWS spend was cut in half.

How Docker can leverage your hardware more efficiently than VMs is easier to see once you review the diagram below. By removing the guest operating system from the equation, we allow the Docker engine to handle all sub-processes (applications) with maximum efficiency. Containers share the entire host's resources, and administrators can prioritize containers over one another to ensure the most critical services are available and responsive when you need them most. This brings up another point: resiliency. By building containers with a health check, Docker can automatically respawn hung or failed processes, allowing developers to handle auto-recovery scenarios and more.

Diagram: Stack comparison between Virtual Machines (left) and Docker (right)
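
As one small sketch of that prioritization, Docker's relative CPU weighting lets an administrator favor a critical service under contention without hard-capping anything (image names are again hypothetical):

    # Relative CPU weighting: under contention, "api" receives roughly four
    # times the CPU time of "reports-batch"; when the host is idle, either
    # container may still use whatever CPU is free.
    docker run -d --name api           --cpu-shares=2048 myorg/api:latest
    docker run -d --name reports-batch --cpu-shares=512  myorg/reports:latest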

Benefits of Docker

Docker simplifies your deployment strategies through the ease of: 

Automation 

Docker allows for the use of many administrative tools. Pick a flavor, but we'll recommend Kubernetes. Kubernetes allows for remote administration of Docker clusters, nodes, networks, volumes, containers and more; basically, everything within the Docker ecosystem can be managed by a Kubernetes admin. Another huge benefit is Kubernetes control platforms such as Kublr, Portainer and Google Kubernetes Engine (GKE). These platforms allow for quick creation of clusters within your favorite cloud provider, giving you almost instant access to a production-deployable environment.
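
As a rough illustration (assuming a cluster and kubectl context are already configured, and using a hypothetical image name), day-to-day administration can be scripted with a handful of commands:

    kubectl get nodes                                         # inspect cluster nodes
    kubectl create deployment api --image=myorg/api:latest    # run the container as a deployment
    kubectl expose deployment api --port=80 --target-port=8080
    kubectl get pods,services                                 # verify what is running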

Scalability 

Docker has tremendous scaling abilities: at the container resource level, in the number of container instances per node, and even across clusters. This means your administrators can fine-tune environments to provide maximum benefit to users and clients.
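
For instance, with Kubernetes the same hypothetical deployment from above can be scaled out by hand or left to scale itself on CPU pressure:

    kubectl scale deployment api --replicas=5
    kubectl autoscale deployment api --min=2 --max=10 --cpu-percent=70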

Concurrency 

As mentioned above under scalability, running containers across clusters (read: geographic regions) enables fail-safe measures if any one node in a cluster fails.
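
A sketch of what that can look like with Docker Swarm (assuming swarm mode is enabled and nodes carry a region label; the label name and image are illustrative):

    # Run six replicas and spread them across nodes labelled by region, so the
    # loss of one node or region leaves the service running elsewhere.
    docker service create --name web \
      --replicas 6 \
      --placement-pref 'spread=node.labels.region' \
      nginx:stable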

Recovery 

If it hasn't become clear yet, Docker's ability to automate every aspect of an environment allows for rapid recovery strategies when multi-region outages occur. Moreover, the ability to leverage health checks and even default restart parameters for each container or service gives you total control over how your environments handle anything from hung processes to regional power outages.
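
A minimal sketch of those knobs on a single container (the endpoint and image are placeholders, and the health check assumes curl is available inside the image):

    # Restart the container if it exits unexpectedly, and flag it unhealthy
    # if the health endpoint keeps failing.
    docker run -d --name api \
      --restart=on-failure:5 \
      --health-cmd='curl -f http://localhost:8080/health || exit 1' \
      --health-interval=30s \
      --health-retries=3 \
      myorg/api:latest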

Now What?

If we haven't sold you on Docker yet, we may never, but for those wondering how to get started, it starts with a proper application assessment and readiness exercise. In this effort we identify the environment requirements for each application; whether it is Java, Python, PHP or something else, we want to ensure Docker can handle the framework. The good news: apart from the full .NET Framework, which requires a Windows-only image, almost every major web technology can run within Docker. This is great news and another reason we're seeing such high adoption and usage rates with Docker.
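
In practice, a quick feasibility check can be as simple as running your code inside an official base image; the entry points, paths and tags below are illustrative:

    docker run --rm -v "$PWD":/app -w /app python:3.12-slim python app.py
    docker run --rm -v "$PWD":/app -w /app eclipse-temurin:21 java -jar app.jar
    docker run --rm -v "$PWD":/app -w /app php:8.3-cli php index.php

If an application starts cleanly this way, building a proper image and pipeline is usually straightforward; if it does not, that is exactly the kind of call-out the assessment is meant to surface.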

Choosing to leverage Docker is a cultural and technological change, one we think can benefit an organization of any size through its simplicity, ease of management and enterprise readiness. Our Software Engineering teams build and deploy critical business applications with Docker all the time. If you're struggling to embrace this new world of hosting and DevOps, we have professionals capable of demystifying the process and helping you modernize applications with minimal ramp-up.

About Andrew Duncan

Andrew Duncan is a Director of Software Engineering in Richmond, VA. He is a driven technologist focused on modern technology stacks and best practices. Andrew believes nothing is more rewarding than turning software needs into reality with a focus on flexible, scalable and supportable code.
