Kubernetes
In the previous article, I discussed some of the drivers and benefits of moving to containerised and cloud-native workloads. Now that you’ve massively reduced the footprint your applications require, you need a way to manage, secure and scale your container-based applications, which is where Kubernetes comes in. There’s an abundance of material available online about Kubernetes, and I’d prefer the main focus of these articles to be on IBM Workload Scheduler, so if you’re new to k8s, or want a refresher, I’d recommend IBM’s “What Is Kubernetes” introduction series: https://ibm.co/3iCmLHV
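To give a concrete flavour of what “manage and scale” means in practice, here’s a minimal sketch using the official Kubernetes Python client. The Deployment name “my-app” and the “default” namespace are just placeholders for illustration; the code simply asks Kubernetes for a new replica count and lets the platform reconcile the running containers to match.

```python
# Minimal sketch: scale an existing Deployment with the official
# Kubernetes Python client (pip install kubernetes).
# "my-app" and "default" are hypothetical placeholders.
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the Deployment's replica count; Kubernetes then starts or
    stops containers until the actual state matches the desired state."""
    config.load_kube_config()  # use credentials from ~/.kube/config
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    scale_deployment("my-app", "default", 5)
```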
The changing role of the IWS administrator
I’ve been working with IBM’s enterprise systems management and automation tools for about 20 years, and from an IBM Workload Scheduler admin perspective, I’ve seen the base product evolve a lot since it was just called Maestro, which was my first experience with it.
From the introduction of Tivoli Management Framework for authenticating users of the Job Scheduling Console (JSC), to the switch to WebSphere Application Server (WAS) combined with the ability to use a ‘real’ RDBMS like Db2 or Oracle, each change meant we admins had to pick up a whole new set of skills on top of our IWS skills.
While it’s unrealistic to expect IWS admins to be able to deploy and manage a Tivoli Management Region, or, more recently, troubleshoot an in-depth WAS issue or deploy an HADR-enabled Db2 database, understanding how IBM Workload Scheduler interacts with the resources it requires from these other products definitely makes their lives easier when working with the teams who do handle those environments.
In the Tivoli Management Framework days, TWS was just one of many applications installed into a Tivoli Management Environment, and the TMR controlled access to the JSC using roles and permissions. You didn’t have to be a Tivoli Management Framework expert to configure the TMR-TWS integration, but it’s likely the Tivoli guys didn’t really know what TWS was, or what it needed, from their Tivoli Administrator perspective, and you’d have to work together to set up the authentication correctly. I can see a lot of similarities with this approach going forward when an IWS administrator wants IWS installed on Red Hat OpenShift: they may not need to know a lot about how OpenShift works, but the more they know, the more it will help.
A lot of the IBM products I’ve been working with over the years are now offered both as traditional on-prem installations and as containers. A deployment of Netcool Operations Insight (which combines multiple components: Netcool/OMNIbus, Impact, Db2, Log Analysis and so on) can now stand up in a fraction of the time it would take to deploy the stack over multiple machines using Installation Manager. There are so many moving parts that need to be installed and configured in exactly the correct order across multiple machines that you’re looking at several days of intensely following documentation and checklists to get it all running.
With on-prem installations, the same applies to IBM Workload Scheduler. The Masters, Dynamic Workload Consoles (DWCs) and agents will likely be installed across several machines, and you may also have Development, Test and Production environments, which come with the same maintenance headaches I talked about in the previous article.
In the next article, I will show how quickly you can stand up or expand IWS environments running on Red Hat OpenShift.