Don’t get left behind! The new era of business digital transformation has moved on to new operating models such as containers and cloud orchestration. 

Let’s find out how to get the best out of Workload Automation (WA) by deploying the solution in a cloud-native environment such as Amazon Elastic Kubernetes Service (Amazon EKS). 

This type of deployment makes the WA topology implementation 10x easier, 10x faster, and 10x more scalable compared to the same deployment on a classical on-premises platform. 

In an Amazon EKS deployment, to best fit your company’s cloud networking needs, you can select the networking components supported by the WA Helm chart to be used for the server and console components:  

  • Load balancers  
  • Ingresses  

You can also leverage the Grafana monitoring tool to display WA performance data and metrics related to the server and console application servers (WebSphere Application Server Liberty Base). Grafana must be installed manually on Amazon EKS to access the Grafana dashboards. Metrics provide a drill-down view of the state, health, and performance of your WA deployment and infrastructure. 

In this blog you can discover how to: 

  • Deploy WA components (Server, Agent, Console) in an Amazon EKS cluster, using one of the available cloud network configurations. 
  • Download the Kubernetes job plug-in from the Automation Hub website and configure it in your Amazon EKS cloud environment.  
  • Monitor the WA solution from the WA customized Grafana Dashboard. 

Let’s start by taking a tour! 

Deploy WA components (Server, Agent, Console) in an Amazon EKS cluster, using one of the available network configurations 

In this example, we set up the following topology for the WA environment and configure the ingress network configuration for the server and console components:  

  • 1 server  
  • 2 dynamic agents  
  • 1 console 

Let’s demonstrate how you can roll out the deployment without worrying about the component installation process. 

For more information about the complete procedure, see: 

https://github.com/WorkloadAutomation/hcl-workload-automation-chart or https://github.com/WorkloadAutomation/ibm-workload-automation-chart/blob/master/README.md 

1. Create the hwa-test namespace for the Workload Automation environment 
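
A minimal sketch of the command (standard kubectl; the namespace name matches the one used throughout this example):

  kubectl create namespace hwa-test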

2. Add the Helm chart to the repo 

Add the Workload Automation chart to your Helm repositories and then pull it to your machine. 
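
For example, with standard Helm commands (the repository alias, repository URL, and chart name below are placeholders; take the exact values from the README linked above):

  helm repo add wa-chart <repository-URL-from-the-README>
  helm repo update
  helm pull wa-chart/<chart-name>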

3. Customize the Helm Chart values.yaml file 

Extract the package from the HCL or IBM Entitled Registry, as explained in the README file, and open the values.yaml Helm chart file. The values.yaml file contains the configurable parameters for the WA components. 

To deploy 2 agents in the same instance, set the waagent replicaCount parameter to 2: 

Snap of replicaCount parameter from the values.yaml file 
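
A minimal sketch of the corresponding values.yaml fragment (the parameter sits under the chart’s agent section; check your chart version for the exact key names):

  waagent:
    replicaCount: 2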

Set the console exposeServiceType to Ingress, as follows: 

Snap of console Ingress configuration parameters from the values.yaml file 
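
A hedged sketch of the console fragment in values.yaml (only exposeServiceType is shown; the ingress host name, TLS secret, and annotations depend on your cluster and chart version):

  waconsole:
    exposeServiceType: Ingress
    # ingress host name, TLS secret, and annotations go here,
    # using the key names defined in your chart version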

Set the server exposeServiceType to Ingress, as follows: 

Snap of server Ingress configuration parameters from the values.yaml file 
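
And the equivalent sketch for the server section (same caveats as above):

  waserver:
    exposeServiceType: Ingress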

Save the changes in the values.yaml file and get ready to deploy the WA solution. 

4. Deploy the WA environment configuration  

Now it’s time to deploy the configuration. From the directory where the values.yaml file is located, run:  
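
A hedged sketch of the install command (the release name wa-test is arbitrary and the chart reference is a placeholder; the README linked above gives the exact chart name):

  helm install -f values.yaml wa-test wa-chart/<chart-name> -n hwa-test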

 

After about ten minutes, the WA environment is deployed and ready to use! 

No other configuration or settings are needed; you can start getting the best out of the WA solution in the Amazon EKS cluster!   

To work with the WA scheduling and monitoring functions, you can use the console as usual, or take advantage of the composer/conman command lines by accessing the WA master pod. 
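
For example, you can open a shell in the server pod and run the command lines from there (the pod name below is a placeholder; depending on the image, you may first need to load the WA user environment):

  kubectl exec -it <wa-server-pod-name> -n hwa-test -- /bin/bash
  # inside the pod, run the command lines as usual, for example:
  conman sc     # show the status of the workstations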

To find out how to get the WA console URL, keep reading! 

Workload Automation component pod view from the Kubernetes Manager tool Lens

Install and configure the Automation Hub Kubernetes plug-in 

Let’s explore how to install and configure native Kubernetes jobs in the Amazon EKS environment.  

NOTE: These installation steps are also valid for any other plug-in available in the Automation Hub catalog. 

To download version 9.5.0.02 of the Kubernetes Batch Job plug-in, go to the following Automation Hub URL: 

 

Workload Automation Kubernetes Batch Job plugin in Automation Hub  

1. Download the package from Automation Hub and extract it to your machine. 

 

2. Copy the JAR file to the DATA_DIR folder of your WA master pod 

From the directory where you extracted the plug-in content, log in to your Amazon EKS cluster, and run the command:  
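
A hedged sketch of the copy command (the JAR file name and pod name are placeholders; the target folder matches the wa-plugin path used in the next step):

  kubectl cp <kubernetes-plugin>.jar hwa-test/<wa-server-pod-name>:/home/wauser/wadata/wa-plugin/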

3. Copy the JAR file to the applicationJobPlugin folder 

Access the master pod and copy the Kubernetes JAR file from the /home/wauser/wadata/wa-plugin folder to the applicationJobPlugin folder. 

Copy command from the Server pod terminal
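
For example (the pod name and installation path are indicative; the applicationJobPlugin folder sits under the TWS installation directory inside the pod):

  kubectl exec -it <wa-server-pod-name> -n hwa-test -- /bin/bash
  cp /home/wauser/wadata/wa-plugin/<kubernetes-plugin>.jar <TWS_installation_dir>/applicationJobPlugin/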

4. Restart the application server  

From the appservertools folder in the TWS installation directory, run the following commands:  

Workload Automation application server start/stop commands. 
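
A hedged sketch of the restart sequence from inside the server pod (the script names can differ slightly between versions; check the contents of the appservertools folder for the exact names):

  cd <TWS_installation_dir>/appservertools
  ./stopAppServer.sh
  ./startAppServer.sh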

 

Now the plug-in is installed, and you can start creating Kubernetes job definitions from the Dynamic Workload Console.  

5. Create and submit the job  

To access the Dynamic Workload Console, you need the console ingress address. You can find it by running the command: 

Kubernetes command to get the list of ingress addresses.  
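
A minimal sketch of the command, assuming the hwa-test namespace used in this example:

  kubectl get ingress -n hwa-test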

 

Build the console URL from the ingress address, as follows: 
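
The resulting URL typically looks like the following (the /console context root is the usual Dynamic Workload Console path; substitute the ADDRESS value returned by the previous command):

  https://<console-ingress-address>/console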

From the Workload Designer, create a new Kubernetes job definition: 

Job definition search page from the Dynamic Workload Console. 

Define the name of the job and workstation where the job runs: 

Job definition page of the Dynamic Workload Console. 

On the Connections page, check the connection to your cluster: 

Connection panel of the Kubernetes Batch Job plugin in the Dynamic Workload Console 

From the Run Kubernetes Job page, specify the name of the Kubernetes job YAML file that you have defined on your workstation.  

Kubernetes job configuration page of Workload Automation console 
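
A minimal example of such a YAML file (the job name, image, and command are purely illustrative):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: hello-batch-job
  spec:
    template:
      spec:
        containers:
        - name: hello
          image: busybox
          command: ["sh", "-c", "echo Hello from Workload Automation"]
        restartPolicy: Never
    backoffLimit: 2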

Now there’s nothing left to do but save the job and submit it to run! 

As expected, the Kubernetes job runs in a new pod deployed under the hwa-test namespace:

Kubernetes Batch job pod view from Kubernetes Manager tool Lens. 

Once the job is done, the pod is automatically terminated.  

Monitor the WA environment through the customized Grafana Dashboard 

Now that your environment is running and you know how to install and use the plug-ins, you can monitor the health and performance of your WA environment. 

Use the metrics that WA has reserved for you! 

To get a list of all the amazing WA metrics available, see the Metric Monitoring section of the README: https://github.com/WorkloadAutomation/hcl-workload-automation-chart 

Log in to the WA custom Grafana dashboard and access one of the following available custom metrics: 

List of Workload Automation custom metrics from the Grafana dashboard  

In each section, discover a brand-new way to monitor the health of your environment! 

Workload Automation custom metrics from the Grafana dashboard – Pod resources 

Take a look at the space available on the WA persistent volumes used for the WA DATA_DIR! 

Workload Automation custom metrics from the Grafana dashboard – Disk usage 

 

Full message queues are just a distant memory! 

Workload Automation custom metrics from the Grafana dashboard – Message queue 

For an example of the installation process, check this out! 

Learn more about Workload Automation and get in touch with us here! 

AUTHORS: 

Serena Girardini

She is the Verification Test manager for the Workload Automation product in distributed environments. She joined IBM in 2000 as a Tivoli Workload Scheduler developer and was involved in the product relocation from the San Jose Lab to the Rome Lab during a short-term assignment in San Jose (CA). For 14 years, Serena gained experience in the Tivoli Workload Scheduler distributed product suite as a developer, customer support engineer, tester, and information developer. For a long time she covered the role of L3 fix pack release Test Team Leader and, in this period, she was a facilitator during critical situations and upgrade scenarios at customer sites. In her last 4 years at IBM, she became IBM Cloud Resiliency and Chaos Engineering Test Team Leader. She rejoined HCL in April 2019 as an expert tester and was recognized as Test Leader for the product porting to the most important cloud offerings in the market. She has a bachelor's degree in Mathematics. LinkedIn: https://www.linkedin.com/in/serenagirardini/ 

Louisa Ientile

Louisa works as an Information Developer, planning, designing, developing, and maintaining customer-facing technical documentation and multimedia assets. Louisa completed her degree at the University of Toronto, Canada, and currently lives in Rome, Italy. LinkedIn: https://www.linkedin.com/in/louisa-ientile 

 

Davide Malpassini

He joined HCL in September 2019 as a Technical Lead, working on the IBM Workload Automation product suite. He has 14 years of experience in software development; his work includes extending the Workload Automation product from a Kubernetes-native environment to the OpenShift Container Platform and the REST API for the Workload Automation engine. He has a master's degree in Computer Engineering. LinkedIn: https://www.linkedin.com/in/davide-malpassini-71b25582/ 

 

Pasquale Peluso

He is a Workload Automation Software Engineer. He joined HCL in September 2019 in the Verification Test team. He works as a verification tester for the Workload Automation suite on distributed and cloud-native environments. He has a master's degree in Automation Engineering. LinkedIn: https://it.linkedin.com/in/pasqualepeluso 

 

Filippo Sorino

He joined HCL in September 2019 as a Junior Software Developer, starting to work as a tester for the IBM Workload Automation product suite. He has a bachelor's degree in Computer Engineering. LinkedIn: https://www.linkedin.com/in/filipposorino 

 

Federico Yusteenappar

He joined HCL in September 2019 as a Junior Software Developer, starting to work as a Cloud Developer for the IBM Workload Automation product suite. His main activity was extending the Workload Automation product from a Kubernetes-native environment to the OpenShift Container Platform. He has a master's degree in Computer Engineering. LinkedIn: www.linkedin.com/in/federicoyusteenappar 