(Photo: Route 152, California — Lawrence Manickam)

Container Challenges

Lawrence Manickam
6 min read · Jan 20, 2020

The explosion of Containers has made IT forget, once again, that it serves the Business.

Though designing and implementing IT systems has become simpler with Cloud Computing, Containers have created a different form of complexity. Most organizations struggle to set a direction for DevOps and Containers.

Container adoption poses challenges to Governance, Regulation and other non-technical IT roles such as procurement, billing and legal, in addition to the technology complexities. In this article, I discuss these challenges and provide high-level solutions to manage them.

Stand-alone Containers

Stand-alone Containers are real. Recently I worked on three containerized COTS products that don't support Kubernetes.

Kubernetes gives Containers replicas, volume management, an IP per Pod, load balancing, scheduling and twenty other nice things. As I mentioned in one of my Docker trending posts, the Container is the bullet and Kubernetes is the gun. When you run containers stand-alone without Kubernetes, you have to reinvent the gun on your own. It is complex, resource intensive and creates endless debates among teams.

A stand-alone container should be treated like any other conventional application, with even more care. It needs a backup/recovery strategy (a stable OCI Container Registry), an external load balancer, storage management, networking and a monitoring infrastructure. Thorough infrastructure planning, design and a POC are required to stand one up.
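As a minimal sketch of what that entails, the fragment below uses the Docker SDK for Python to start a stand-alone container with a restart policy, a named volume and a published port. The image name, volume and port are placeholders, and the load balancer, backups and monitoring still have to be wired up separately.

    import docker

    client = docker.from_env()

    # Everything Kubernetes would normally handle has to be declared by hand:
    # restart behaviour, storage, networking and naming.
    container = client.containers.run(
        "registry.example.com/cots/app:1.4.2",   # placeholder image
        name="cots-app",
        detach=True,
        restart_policy={"Name": "on-failure", "MaximumRetryCount": 5},
        ports={"8080/tcp": 8080},                 # published manually, no Service object
        volumes={"cots-app-data": {"bind": "/var/lib/app", "mode": "rw"}},
    )
    print(container.name, container.status)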

The industry will see stand-alone Docker containers for many years to come. Be prepared. Keep a playbook and strategy in place.

How to receive Containers from the Vendor?

Receiving software from the vendor was easy a few years back. The engineer logged into the vendor website to download the required software, or used the vendor's installer CD with a registration key (highly trusted).

Containerized COTS products introduce a new set of Governance and technology factors around receiving software from the vendor.

  • Is it OK to pull the Container from the vendor's public Docker Hub? What is the best possible way to pull the Container image when leadership has disapproved public Docker Hub access?
  • My container registry lives inside the corporate network. Will management allow me to mirror it with the public Docker Hub?
  • What if I simply get the Dockerfile from the vendor and build the container in our DevOps CI/CD pipeline?
  • Will corporate security allow me to connect to the Internet to run a Dockerfile that pulls several public components?
  • Where do I store the Dockerfile? What is the versioning strategy?

Endless questions with no direction.

A software vendor recommended storing their COTS image in our local repository for a sensitive application. I objected to the idea for versioning and security reasons. The DevOps engineer may fail to pick up the updated version of the Container for future software updates, or someone in the CI/CD pipeline may tamper with the image by mistake.
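One hedge against both concerns is to reference the vendor image by digest rather than by tag: a digest is immutable, so a retagged or tampered image simply fails to resolve. A rough sketch, with a hypothetical vendor registry and an all-zero placeholder digest:

    import subprocess

    # Pulling by digest (immutable) instead of by tag (mutable) guarantees the
    # image is byte-for-byte the one the vendor published. Registry and digest
    # below are placeholders.
    image = ("vendor-registry.example.com/cots/product"
             "@sha256:0000000000000000000000000000000000000000000000000000000000000000")

    subprocess.run(["docker", "pull", image], check=True)

    # Re-tag into the internal registry so the CI/CD pipeline never touches
    # the vendor registry directly.
    internal = "registry.corp.example.com/cots/product:1.4.2"
    subprocess.run(["docker", "tag", image, internal], check=True)
    subprocess.run(["docker", "push", internal], check=True)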

The core problem in DevOps is 'the team troubleshooting something that doesn't add any value to the business.'

It is important for your organization to define a COTS Container Strategy with some degree of Governance flexibility. Don’t stick to your guns.

Manage multiple Container Registries

MultiCloud at organizations happens either by accident or by plan. Accidental MultiCloud can arise for several reasons, such as legal constraints, early adoption of a specific Cloud, or plain ignorance.

Either way, the adoption introduces multiple Container Registries on the corporate network, and it is painful. I know a customer with a custom-built container registry on AWS, Azure Container Registry, Harbor and Red Hat Quay: four unique Container registries in one organization. You can visualize their pain. They didn't plan for this havoc. It's the result of no Governance.

The worst-case scenario is that every Container Registry brings its own small or large DevOps CI/CD pipeline into the environment.

It is important to define an OCI-compliant Enterprise Container Registry before it gets too late. Build your own with all the NFRs, or use a Cloud provider's PaaS Container registry such as Azure Container Registry. Keep a backup (mirrored registry) in your own environment, or in another Cloud, for DR.
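As a sketch of the "registry plus DR mirror" idea, the snippet below pushes the same image to a primary registry and to a second registry in another Cloud. The registry hosts and image name are placeholders, and authentication is assumed to be handled outside the script (docker login or cloud credentials).

    import docker

    client = docker.from_env()

    # Placeholder registries: the primary enterprise registry and a DR mirror.
    PRIMARY = "registry.corp.example.com"
    DR = "drregistry.azurecr.io"

    image = client.images.get("payments-api:2.3.0")   # image already built locally

    for registry in (PRIMARY, DR):
        target = f"{registry}/payments-api"
        image.tag(target, tag="2.3.0")
        # Push the same content to both registries so either can serve pulls
        # if the other is unavailable.
        for line in client.images.push(target, tag="2.3.0", stream=True, decode=True):
            if "error" in line:
                raise RuntimeError(line["error"])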

Build or Pull

Several organizations block access to the public Docker Hub. The reasoning behind the decision is understandable. However, how are you going to use standard containers such as Apache Tomcat, MongoDB and MySQL?

It is not wise to build them on your own. It is tedious, error prone and puts you back on the conventional versioning path.

Mirroring is the key. It lets your Docker daemons point at your local Container registry to pull images, and version changes at the public Docker Hub are synced into your local Container registry automatically. It also saves your Docker daemon hosts from generating unnecessary Internet traffic. A container vulnerability scanning tool such as Twistlock can be used to scan the images for viruses, malware and other issues.
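One concrete way to do this, assuming you run a pull-through cache registry on premises, is to point each Docker daemon at it through the registry-mirrors setting in /etc/docker/daemon.json. A small sketch that writes the setting (the mirror URL is a placeholder, and the daemon has to be restarted afterwards):

    import json
    from pathlib import Path

    DAEMON_JSON = Path("/etc/docker/daemon.json")

    # Point the local Docker daemon at the internal pull-through cache so that
    # "docker pull tomcat" is served (and cached) by the corporate registry
    # instead of reaching the public Docker Hub directly.
    config = json.loads(DAEMON_JSON.read_text()) if DAEMON_JSON.exists() else {}
    config["registry-mirrors"] = ["https://mirror.registry.corp.example.com"]  # placeholder URL
    DAEMON_JSON.write_text(json.dumps(config, indent=2))

    # Restart the daemon afterwards, e.g. systemctl restart docker.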

Do not build standard containers on your own. Instead pull them using the above strategy.

Container Trust

Containers introduce several third-party components into the software build line. A container build downloads OS libraries, software binaries and third-party tools. With the use of third parties comes the question: "Do I trust the source?"

You want to be sure the container given by the creator is what you get from the container registry.

For containers, the way to address the issue of trust is to let DevOps engineers sign the container when they build it. Duly implemented digital signatures can assure users that the container they're pulling is the same container that the original DevOps engineer built.

DCT (Docker Content Trust) and other third-party vendor tools help with your container signing process.
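As a minimal illustration, Docker Content Trust is switched on with an environment variable: a push from the build host signs the image, and a pull elsewhere with the same variable set refuses anything unsigned. The image name below is a placeholder, and the signing keys are assumed to already exist on the build host.

    import os
    import subprocess

    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    image = "registry.corp.example.com/payments-api:2.3.0"  # placeholder

    # With DOCKER_CONTENT_TRUST=1 the push signs the image with the local
    # content-trust keys ...
    subprocess.run(["docker", "push", image], check=True, env=env)

    # ... and any pull with the same variable set fails unless the signature
    # verifies against the publisher's key.
    subprocess.run(["docker", "pull", image], check=True, env=env)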

OCI containers

The Open Container Initiative (OCI) provides an open-source technical community and standards body to build a vendor-neutral, portable, open specification and runtime.

The mission of the OCI container format and runtime specification:

  • Provide a container format and runtime specification that enables portability across compliant runtimes.
  • Provide a robust stand-alone runtime that can directly consume the specification and run a container.

Public Cloud providers such as Azure, Google and AWS provide OCI-compliant registries that support the Docker image manifest. Though "Dockerless Containers" are tempting, note that Docker still leads the market and most vendors ship their COTS products as Docker images.

Red Hat is pushing OCI container build and management tools such as Buildah and Podman to remove its dependency on Docker. However, it will take years for these tools to reach sustainable maturity.
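For teams that want to try the Docker-less path today, the same Dockerfile can usually be built and run with Podman, whose CLI is largely compatible with Docker's. A quick sketch (the image name is a placeholder, and Podman is assumed to be installed on the host):

    import subprocess

    image = "registry.corp.example.com/payments-api:2.3.0"  # placeholder

    # podman build/run mirror docker build/run, but need no long-running
    # daemon and can run rootless.
    subprocess.run(["podman", "build", "-t", image, "."], check=True)
    subprocess.run(["podman", "run", "--rm", "-d", "-p", "8080:8080", image], check=True)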

Though Dockerless Containers will lead the industry in the future, you should also have a strategy to accommodate Docker Containers.

Run Linux Containers in Windows OS

Don’t do this.

The complexity of Hyper-V, the Moby VM and Nano containers with the Docker daemon will bring you down. You may end up troubleshooting something irrelevant that hurts the value proposition.

Run Windows Containers in Linux OS

I see a few COTS products come with this model. A Linux container is small because it shares the kernel of the host Linux operating system. The architecture of MS Windows is different, meaning you have to pack essentially the whole Windows OS into the Docker Container for it to operate: a VM inside the Container. The footprint is big, and the build time increases.

It introduces nested virtualization, emulation, product licensing and a few special configurations on the Linux host. Use due diligence in managing this type of COTS product and environment.

DevOps CI/CD

Jenkins is the de-facto standard for DevOps CI/CD. Organizations use it extensively not only to build their code but also their IaC (Infrastructure as Code) and Containers. The Container build CI/CD pipeline must include a review process for IaC and highly available integration points with Git, Artifactory, the Container Registries and the target deployment systems.
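A hedged sketch of the core build-and-push stage such a pipeline wraps, using the Docker SDK for Python (the registry URL and tag are placeholders; the review gates and the Git and Artifactory integration sit around this step, not inside it):

    import docker

    client = docker.from_env()

    REGISTRY = "registry.corp.example.com"    # placeholder enterprise registry
    IMAGE = f"{REGISTRY}/payments-api"
    TAG = "build-1234"                        # would come from the CI build number

    # Build from the reviewed Dockerfile in the checked-out workspace ...
    image, build_logs = client.images.build(path=".", tag=f"{IMAGE}:{TAG}", rm=True)

    # ... and push to the enterprise registry for the deployment stages to consume.
    for line in client.images.push(IMAGE, tag=TAG, stream=True, decode=True):
        if "error" in line:
            raise RuntimeError(line["error"])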

Monitoring

Focus on monitoring the service inside the container rather than the Container itself. Containers are ephemeral, and while the PID of the Container can be monitored in various ways, what you really have to monitor are the endpoints and the uninterrupted application deployment processes. In the next few years, we will stop monitoring IT systems altogether and focus on monitoring business processes.
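A trivial sketch of "monitor the service, not the container": probe the application's health endpoint over HTTP instead of asking the runtime whether a PID exists. The URL below is a placeholder.

    from urllib.request import urlopen
    from urllib.error import URLError

    # Check the service the container exposes, not the container process itself.
    HEALTH_URL = "http://payments-api.corp.example.com/healthz"  # placeholder endpoint

    try:
        with urlopen(HEALTH_URL, timeout=5) as resp:
            healthy = resp.status == 200
    except URLError:
        healthy = False

    print("service healthy" if healthy else "service DOWN")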

Docker still holds 89% of the Container market share. It is important to have socialized Enterprise DevOps practices at your organization to host different types of Containers, each with its own unique implementation requirements.

Lawrence Manickam is the Founder of Kuberiter Inc, a Seattle-based start-up that provides Enterprise/SaaS DevOps Services (Kubernetes, Docker, Helm, Istio and CyberArk Conjur) for MultiCloud.

Please subscribe at www.kuberiter.com to try our DevOps SaaS Services.
