Can you have your desired number of Workload Automation (WA) agent, server, and console instances running whenever and wherever? Yes, you can! 

Starting from Workload Automation version 9.5 Fix Pack 2, you can deploy the server, console, and dynamic agent on OpenShift 4.2 or later platforms. This kind of deployment makes the Workload Automation topology implementation 10x faster and 10x more scalable compared to the same deployment on a classical on-premises platform. 

Workload Automation provides you with an effortless way to create, update, and maintain both the installed WA Operator instance and the WA component instances, leveraging the Operator feature that Red Hat introduced starting from OCP version 4.1. 

In this blog, we address the following use cases by using the new deployment method: 

  • Download and Install WA operator and WA component images 
  • Scheduling – Scale UP and DOWN the number of WA instances  
  • Monitoring WA resources to improve Quality of Service by using OCP dashboard console 

Download and Install WA operator and WA component images 

WA operator and pod instances prerequisites 

Before starting the WA deployment, ensure your environment meets the required prerequisites. For more information, see https://www.ibm.com/support/knowledgecenter/SSGSPN_9.5.0/com.ibm.tivoli.itws.doc_9.5/README_OpenShift.html 

Download WA operator and components images 

Download the Workload Automation Operator and product component images from the appropriate web site. Once deployed, the Operator can be used to install the Workload Automation components and manage the deployment thereafter. 

The following are the available packages: 

Flexera (HCL version):  

  • 9.5.0-HCL-IWS_OpenShift_Server_UI_Agent_FP0002.zip containing the images for all HCL components. Download this package to install either all or select components (agent, server, console). 
  • 9.5.0-HCL-IWS_OpenShift_Agent_FP0002 containing the HCL agent image 

Fix Central (IBM version): 

  • 9.5.0-IBM-IWS_OpenShift_Server_UI_Agent_FP0002.zip containing the images for all IBM components. Download this package to install either all or select components (agent, server, console). 
  • 9.5.0-IBM-IWS_OpenShift_Agent_FP0002 containing the IBM agent image 

Each Operator package has the following structure (keep it in mind; it will be useful in the steps we are going to see later): 

The README file in the Operator package has the same content as the URL provided in the prerequisites section. 

Once the WA Operator images have been downloaded, you can proceed by downloading the Workload Automation component images. 

In this article, we will demonstrate what happens when you download and install the IBM version of the WA Operator and how it can be used to deploy the server (master domain manager), console (Dynamic Workload Console), and dynamic agent components. 

Deploy the WA Global Operator in OCP
Now we focus on the creation of the Operator to be used to install the Workload Automation server, console, and agent components. 

NOTE: 

Before starting the deployment, proceed as follows: 

  • Push the server, console, and agent images to your private registry reachable by the OCP cloud, or to the internal OCP registry. 
  • On an external VM reachable by the OCP cloud, install the relational database needed to persist the server and/or console data. 

## Building and pushing the Operator images to a private registry reachable by the OCP cloud or the internal OCP registry 

To generate and publish the Operator images by using the Docker command line, run the following commands: 

    docker build -t <repository_url>/IBM-workload-automation-operator:9.5.0.02 -f build/Dockerfile . 

    docker push <repository_url>/IBM-workload-automation-operator:9.5.0.02 

where <repository_url> is your private registry reachable by OCP or Internal OCP registry. 
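The two commands can be wrapped in a small script parameterized on the registry URL (a sketch only; the image name and tag are taken from the commands above, and `docker` is assumed to be available on the PATH):

```shell
#!/bin/sh
# Build the WA Operator image from the package's Dockerfile and push it
# to a registry reachable by OCP. The registry URL is the first argument,
# e.g.: build_and_push registry.example.com/wa
build_and_push() {
  repo="$1"
  image="${repo}/IBM-workload-automation-operator:9.5.0.02"
  docker build -t "$image" -f build/Dockerfile . &&
  docker push "$image"
}
```

Run it from the root of the extracted Operator package, so that `build/Dockerfile` resolves correctly.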

Otherwise, if you want to use the Podman and Buildah command lines, see the related commands in the README file. 

## Deploying IBM Workload Scheduler Operators by using the OpenShift command line 

Before deploying the IBM Workload Scheduler components, you need to perform some configuration steps for the WA_IBM-workload-automation Operator: 

  • create the workload-automation dedicated project by using the oc command line, as follows: 

     oc new-project workload-automation 

  • create the WA_IBM-workload-automation operator service account: 

     oc create -f deploy/WA_IBM-workload-automation-operator_service_account.yaml 

  • create the WA_IBM-workload-automation operator role: 

     oc create -f deploy/WA_IBM-workload-automation-operator_role.yaml 

  • create the WA_IBM-workload-automation operator role binding: 

     oc create -f deploy/WA_IBM-workload-automation-operator_role_binding.yaml 

  • create the WA_IBM-workload-automation operator custom resource definition: 

     oc create -f deploy/crds/WA_IBM-workload-automation-operator_custome_resource_definition.yaml 
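The project creation and the four `oc create` steps above can be collected in one script (a sketch; the file names follow the Operator package layout listed above, and you must be logged in to the cluster with `oc login` first):

```shell
#!/bin/sh
# Create the dedicated project, then the Operator resources in order:
# service account, role, role binding, custom resource definition.
deploy_wa_operator() {
  oc new-project workload-automation
  for f in \
    deploy/WA_IBM-workload-automation-operator_service_account.yaml \
    deploy/WA_IBM-workload-automation-operator_role.yaml \
    deploy/WA_IBM-workload-automation-operator_role_binding.yaml \
    deploy/crds/WA_IBM-workload-automation-operator_custome_resource_definition.yaml
  do
    oc create -f "$f"
  done
}
```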

## Installing the operator 

We do not have the Operator Lifecycle Manager (OLM) installed, so we performed the following steps to install and configure the operator: 

  1. In the Operator structure that we have shown you before, open the `deploy` folder.
  2. Open the `WA_IBM-workload-automation_operator.yaml` file in a flat text editor.
  3. Replace every occurrence of the `REPLACE_IMAGE` string with the following string: `<repository_url>/<wa-package>-operator:9.5.0.02`, where <repository_url> is the repository you selected earlier when pushing the images.
  4. Finally, install the operator, by running the following command: 

      oc create -f deploy/WA_IBM-workload-automation_operator.yaml 
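The string replacement in step 3 can be automated with `sed` instead of a text editor (a sketch; the file name follows step 2, and the image reference is the one you pushed earlier):

```shell
#!/bin/sh
# Replace every occurrence of REPLACE_IMAGE in the operator YAML with the
# real image reference, editing the file in place (GNU sed assumed).
# Example:
#   patch_operator_image deploy/WA_IBM-workload-automation_operator.yaml \
#     registry.example.com/ibm-workload-automation-operator:9.5.0.02
patch_operator_image() {
  yaml="$1"
  image="$2"
  sed -i "s|REPLACE_IMAGE|${image}|g" "$yaml"
}
```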

Note: If you have the Operator Lifecycle Manager (OLM) installed, see the README file for how to configure the Operator. 

Now we have the WA operator installed in our OCP 4.4 cloud environment, as you can see in the following picture: 

Fig 1. OCP dashboard – Installed Operators view 

## Deploy the WA server, console, and agent component instances in OCP 

  1. Select the installed WA Operator and go to the YAML section to set the parameter values that you need to install the server, console, and agent instances.
  2. Choose the components to be deployed by setting them to true. In this article, we are going to deploy all components, so we set all values to true.
  3. Set the number of pod replicas to be deployed for each component. You can leave the default, but in this article we decided to set replicaCount to 2 for each component.
  4. Accept the license by setting it to accept.
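Put together, the YAML section of the custom resource might look like the following fragment (a sketch only; the exact field names depend on the custom resource definition shipped in your Operator package, so check the YAML shown in the OCP console):

```yaml
spec:
  license: accept          # accept the license (step 4)
  waserver:
    enabled: true          # deploy the server (master domain manager)
    replicaCount: 2
  waconsole:
    enabled: true          # deploy the Dynamic Workload Console
    replicaCount: 2
  waagent:
    enabled: true          # deploy the dynamic agent
    replicaCount: 2
```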

After all changes have been performed, go to the WA Operator section and select "Create WorkloadAutomationProd". 

Fig 2. OCP dashboard – Installed Operators – IBM Workload Automation Operator instance view 

When the action is completed, you can see that the number of running WA pods matches the number selected in the YAML file for the server, console, and agent components: 

Fig 3. OCP dashboard – Workload – Running pods for the workload-automation project. 

Scheduling – Scale UP and DOWN the number of instances  

Thanks to the Operator feature, you can scale each component up or down by simply going to the installed WA Operator and modifying the "replicaCount" value in the YAML file related to the instance you previously created. 

When you save the change to the YAML file, the Operator automatically updates the number of instances according to the value you set for each component. 
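The same change can also be made from the command line with `oc patch` (a sketch; the custom resource kind and instance name used here, `workloadautomationprod` and `wa`, are assumptions — check the actual names with `oc get` in your project):

```shell
#!/bin/sh
# Scale the dynamic agent by patching replicaCount in the custom
# resource; the Operator reconciles the pods to the new count.
scale_wa_agent() {
  replicas="$1"
  oc patch workloadautomationprod wa -n workload-automation \
    --type merge \
    -p "{\"spec\": {\"waagent\": {\"replicaCount\": ${replicas}}}}"
}
```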

In this article, we show you how we scaled the wa-agent instances up from 2 to 3 by increasing the replicaCount value, as you can see in the following picture: 

 

Fig 4. OCP dashboard – Modify the Installed Operators YAML file for the workload-automation project. 

After a simple "Save" action, you can immediately see the updated number of running pod instances, as shown in the following picture: 

Fig 5. OCP dashboard – Workload – Pods view for the workload-automation-jade project. 

Note: You can repeat the same scenario for the master domain manager and the Dynamic Workload Console. The elastic scaling makes the deployment implementation 10x faster and 10x more scalable also for the main components. 

Monitoring WA resources to improve Quality of Service by using OCP dashboard console 

Last but not least, you can monitor the WA resources by using the OCP native dashboard or by drilling down into the Grafana dashboard. In this way, you can understand the resource usage of each WA component, correlate usage with performance across WA resources, and scale the number of WA components up or down to improve overall throughput and Quality of Service (QoS). 

This way, you can understand whether the number of WA instances you deployed can support your daily scheduling; if not, you can increase the number of instances. Furthermore, you can understand whether you need to adjust the number of WA console instances to support simultaneous access by multiple users, which is already empowered by the load balancing provided by the OCP cloud. 

In our example, after having scaled the replicaCount up to 3, we realized that 2 instances were sufficient for good performance in our daily scheduling. Thus, we decreased the instances to 2 so as not to exceed the available resource quotas. 

The following picture shows the scaling down from 3 to 2 instances: 

Fig 6. OCP dashboard (workload automation namespace) 

The following picture shows a drill-down on the Grafana dashboard of a defined range of time in which we scaled down from 3 to 2 server instances. 

Fig 7. Grafana dashboard – CPU Usage and quota for wa-waserver instance in workload automation namespace 

Fig 8. Grafana dashboard – Memory Usage and Quota for wa-waserver instance in workload automation namespace 

Fig 9. Grafana dashboard – Network Usage and Receive bandwidth for the wa-waserver instance in the workload automation namespace 

Fig 10. Grafana dashboard – Network Average Container Bandwidth by pod - Received and Transmitted for the wa-waserver instance in the workload automation namespace 

Author Bio

Serena Girardini, Workload Automation Test Technical Leader 

Serena Girardini is the System Verification Test Team leader for the Workload Automation product in distributed environments. She joined IBM in 2000 as a Tivoli Workload Scheduler developer and was involved in the product relocation from the San Jose Lab to the Rome Lab during a short-term assignment in San Jose, CA. For 14 years, Serena gained experience in the Tivoli Workload Scheduler distributed product suite as a developer, customer support engineer, tester, and information developer. For a long time she covered the role of Test Team Leader for L3 fix pack releases, and in this period she was a facilitator during critical situations and upgrade scenarios at customer sites. In her last 4 years at IBM, she became the IBM Cloud Resiliency and Chaos Engineering Test Team Leader. She joined HCL in April 2019 as an expert Tester for the IBM Workload Automation product suite and was recognized as Test Leader for the product porting to the most important Cloud offerings on the market. She has a bachelor's degree in mathematics. 

Linkedin: https://www.linkedin.com/in/serenagirardini/ 

Federico Yusteenappar, Workload Automation Junior Software Developer 

Federico joined HCL in September 2019 as a Junior Software Developer, starting to work as a Cloud Developer for the IBM Workload Automation product suite. His main activity has been the extension of the Workload Automation product from a native Kubernetes environment to the OpenShift Container Platform. He has a master's degree in Computer Engineering. 

Linkedin: www.linkedin.com/in/federicoyusteenappar 

 

 

 
