A Practical Overview of Containerization

Mar 10, 2023
Infrastructure and Cloud Technology Services | 3 min Read
Archimedes once said, “Give me a place to stand, and a lever long enough, and I will move the Earth.”
Rakish Poulose

Associate Vice President

Birlasoft

 
While the physics may hold, finding that “place to stand” has never become practical. It’s a similar story with containerization: it is not practical to move the entire IT world from VMs to containers. A more pragmatic approach, and one that delivers greater business benefit, is to identify specific target workloads and focus on those rather than trying to containerize every workload in the datacenter.
Why containerize in the first place? As the IT world moved from Waterfall to Agile development and from monoliths to microservices-based applications, running each microservice in a heavyweight VM made little sense; switching to a lightweight alternative – containers – was inevitable.
This model also became a catalyst for emerging technologies such as IoT and data analytics. DevOps methodologies and tools that enabled continuous integration (CI) and continuous delivery (CD) fast-tracked the adoption of containers. CI/CD pipelines automated the workflow across the stages of app development: check in and integrate code changes, validate, build, package, test, and finally deploy the new release to production.
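As a rough illustration of those stages, the sketch below drives the build, test, and deploy steps with the Docker SDK for Python; the image name, registry, and test command are placeholders, not details from this article.

```python
# Sketch of the build / test / deploy stages a CI/CD pipeline automates,
# using the Docker SDK for Python. Names and tags are illustrative placeholders.
import docker

client = docker.from_env()

# Build: package the application and its dependent libraries into an image.
image, _ = client.images.build(path=".", tag="registry.example.com/myapp:1.0")

# Test: run the test suite inside a throwaway container of the new image;
# a non-zero exit raises ContainerError, which would fail this pipeline stage.
logs = client.containers.run(image, "pytest -q", remove=True)
print(logs.decode())

# Deploy: push the validated image to a registry for rollout to production.
client.images.push("registry.example.com/myapp", tag="1.0")
```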
Containers are lightweight because they do not include an entire operating system (OS) image; they leverage the underlying host OS instead and contain only the application and its dependent libraries. That enables multiple applications to run in containers side by side on the same host without conflicting with each other – even different versions of the same application, each with its own set of libraries packaged in its container. This architecture makes it easy to port containers from one host to another within a data center (DC) or a public cloud, as long as the underlying OS is compatible. Since most containers used by businesses are Linux-based, OS compatibility has rarely been a challenge. Windows containers are comparatively newer, and we will cover them a little later.
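To make the side-by-side point concrete, here is a minimal sketch using the Docker SDK for Python; the image names, ports, and container names are hypothetical.

```python
# Minimal sketch: two versions of the same application, each packaged with its
# own libraries, running side by side on one host and sharing the host OS kernel.
# Image names, ports and container names are hypothetical.
import docker

client = docker.from_env()

v1 = client.containers.run("myorg/webapp:1.4", detach=True, name="webapp-v1",
                           ports={"8080/tcp": 8081})
v2 = client.containers.run("myorg/webapp:2.0", detach=True, name="webapp-v2",
                           ports={"8080/tcp": 8082})

# Both containers run independently; neither can see or conflict with the
# other's packaged libraries or files.
print(v1.name, v2.name)
```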
Another development over the past few years is that popular hypervisors have integrated container engines and runtimes into the hypervisor layer itself, making it possible to provision and deploy containerized applications without creating servers. Similarly, public cloud service providers like AWS, GCP, and Azure have launched serverless compute platforms where a user can provision and deploy apps in containers without provisioning servers first. This drastically brings down the time required to deploy new applications.
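As one hedged example of what that looks like in practice, the sketch below launches a container task on AWS Fargate through the ECS API using boto3; the cluster name, task definition, and subnet ID are placeholders rather than values from this article.

```python
# Sketch: launching a containerized app on a serverless platform (AWS Fargate
# via ECS) -- no servers are provisioned first. Cluster, task definition and
# subnet values are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",              # serverless: no EC2 instances to manage
    taskDefinition="myapp-task:1",     # points at the container image to run
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```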
Windows containers are an option for containerizing legacy (Windows-based) monolithic applications. This is a useful strategy for freeing up and decommissioning legacy hardware systems that are out of support and/or due for a refresh. Though Windows container images are significantly larger than their Linux counterparts, moving monoliths to Windows containers helps create a standard monitoring and support model alongside other containerized applications.
To sum up, given the rate of container adoption and the rapidly maturing supporting ecosystem, it won’t be long before serverless containers on private and public clouds become the default choice for deploying applications. In the next update, we’ll discuss container orchestration and governance.