So, let's get started. The drive toward containerised software and cloud-native applications has been revolutionising the IT workplace in recent years. It's hard to believe Kubernetes (k8s) is only around five years old, yet you would be hard-pressed to find a large company or cloud provider that has not already dedicated resources to moving to containers, and this is only set to increase.

Source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

So why is this? The diagram above shows traditional, virtualised and, now, container-based deployments of applications. An IT function's goal has always been to run its applications, whatever they may be, as efficiently as possible, continually re-evaluating and realising cost savings on hardware, operating systems and their maintenance using the best methods available.

In a traditional, older deployment, a server often had only one purpose: to host a single application. Without virtualisation, this approach was inherently inefficient, requiring racks of physical machines with all their maintenance, security and cooling requirements. Each server had its own operating system that required patching and reboots. Multiple machines, multiple things to maintain, multiple technical staff with different skillsets, multiple things that could go wrong.

Virtualisation has been around far longer than most people realise, but with the rise of VMware, and later Microsoft's Hyper-V, the efficiency of hosting applications in the datacentre went through the roof. Now you could carve up those physical servers of yours into separate virtual machines. You still had the overheads of maintaining physical machines and operating systems to contend with, but moving away from single-purpose application servers meant you could retire large numbers of expensive, older servers thanks to the increased density per physical server that virtualisation gives you.

The container-based approach reduces an application to its component parts. Each container should perform only a single function within the operation of the application, and it should include everything it needs to run without relying on the host operating system for libraries, files or binaries. Each container is self-contained and should be designed to run anywhere.
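As a sketch of what "self-contained" means in practice, a minimal Dockerfile bakes the application's runtime and dependencies into the image itself, so nothing is borrowed from the host OS except the kernel. The image names, package and file paths below are illustrative only, not taken from any IWS product image:

```dockerfile
# Illustrative only: a self-contained image carries its own base
# filesystem, libraries and binaries; the host supplies nothing
# but the kernel.
FROM ubuntu:20.04

# Install the runtime dependencies INSIDE the image, not on the host.
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-11-jre-headless \
    && rm -rf /var/lib/apt/lists/*

# Copy in the single component this container is responsible for
# (hypothetical application archive).
COPY myapp.jar /opt/myapp/myapp.jar

# One container, one function.
ENTRYPOINT ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Because every dependency is declared here, the resulting image runs identically on a laptop, a test server or a cloud provider's worker node.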

The above is a common, real-world scenario. A project is created to migrate IBM Workload Scheduler to a new set of servers. The teams provisioning the servers and deploying the operating systems to them are different; they may well be third-party companies outside your own organisation. The filesystems are set up wrong – they're too small! You asked for AIX 7.2 TL2 and you got AIX 7.2 TL4. Oh, and AIX 7.2 TL3 on a couple of them, due to the length of time it took to get them all provisioned.

So now you have differences in your new, shiny IWS estate, which is something you really don’t want. You want to find any issues in your test IWS environment. You don’t want to see these in pre-prod and you absolutely cannot have them in production. If the underlying servers and operating systems differ in any way, particularly at the OS level, you cannot hand-on-heart say test will work the same as pre-prod will work the same as production.

With containers, the above headaches go away. The containers are built identically and will work exactly the same across all three environments.
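One common way that consistency is enforced (a sketch, not taken from the IWS documentation) is to reference the container image by an immutable digest rather than a mutable tag, so that test, pre-prod and production are guaranteed to pull the exact same image bytes. The registry name, labels and digest below are placeholders:

```yaml
# Illustrative Kubernetes Deployment fragment. Pinning the image by
# digest means every environment runs the identical image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # The same digest is promoted through test, pre-prod and
          # production, rather than rebuilding per environment.
          image: registry.example.com/myapp@sha256:placeholderdigest
```

The image is built once, then promoted unchanged from one environment to the next; the environments differ only in configuration, never in the software itself.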

Because containers include only what they need to run, their footprint, in both memory and CPU, is greatly reduced, which allows a greater density of applications per server. More servers can be decommissioned, and more savings are realised.

Your company is probably already well down the path of retiring servers and moving to a cloud provider.

In the next article I will discuss the changing role of the IBM Workload Scheduler Administrator.

IBM Workload Scheduler 9.5 on OpenShift – Part 2