Workload Automation
Complete visibility and control over workloads
Automation | February 9, 2021
Automate Anything, Run Anywhere, starting from the Workload Automation Test environment!
Automate Anything, Run Anywhere is our motto! What better place to start making it real than our very own Workload Automation test environment? We have been applying it in our Test Lab for Workload Automation version 9.5 since its general availability in 2019, and to all the subsequent fix packs, including the latest one, fix pack 3.
Automation | February 1, 2021
How to move your Workload Automation on-premises deployment to cloud
To improve productivity and to save IT management costs and time, our customers are starting to move to the cloud, shifting heavy workloads from a set of on-premises machines to a resilient cloud infrastructure. So, our principal mission is to facilitate, secure, and support moving a workload automation deployment from an on-premises infrastructure (minimum WA 9.5 FP3 required) to any cloud infrastructure where Workload Automation (WA) is already supported. Link to the video guide: https://youtu.be/7AQHgCnpqLc
Automation | January 14, 2021
Safeguarding Carryforwards during a Migration to WA 9.5
A question on everyone's mind while upgrading to WA 9.5 is how to manage all the carryforward job streams present in the old production plan on the older version of the master, and how to migrate them to the newer master server. This blog aims to solve this problem once and for all, ensuring a seamless transition to IWS 9.5 without any hassle. As you already know if you are reading this blog, WA 9.5 comes with a whole set of new features; the most noticeable architectural change is the move to Liberty as middleware, in place of WebSphere Application Server and JazzSM, for both the engine and the DWC profile.
Automation | January 14, 2021
Manage your Azure Resource by using Azure Resource Manager with Workload Automation
Let us begin with an understanding of what Azure is all about before moving on to our Azure Resource Manager plugin and how it benefits our workload automation users. Azure is incredibly flexible and allows you to use multiple languages, frameworks, and tools to create the customized applications you need. As a platform, it also allows you to scale applications up with unlimited servers and storage.
Automation, Innovations | December 22, 2020
Are you ready for an exciting, immersive experience in Workload Automation?
As human beings, we have always been fascinated by the unknown: we need to understand it, interpret it, and draw results from it. Today, however, living in an increasingly hyper-technological, interconnected world where we collect billions upon billions of data points, we struggle because our ability to acquire data exceeds our ability to give it meaning. Beyond the data itself, visualization becomes crucial in driving root cause analysis, explaining concepts, and extracting useful insights from data. Visualization can also make data understandable to non-data experts.
Automation | December 1, 2020
Exploit the new commands available for the WA plugin for Zowe CLI V1.1.0
Zowe and its major components, the Web UI, API Mediation Layer, and CLI, are likely to become the new interface for the next generation of mainframers. The Zowe framework is the bridge that connects modern applications with the mainframe by providing easier interoperability and scalability among products and solutions offered by multiple vendors. Developers, testers, operators, and any other professionals in the mainframe realm can easily create their own tools to automate tasks that would usually be done manually or through mainframe-native tools. They can build, modify, and debug z/OS applications even with limited z/OS expertise.
Automation | November 18, 2020
Manage your message delivery system by using Amazon Simple Notification Service (SNS) with Workload Automation
Amazon Simple Notification Service (SNS) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Clients can subscribe to the SNS topic and receive published messages using a supported protocol, such as Amazon SQS, AWS Lambda, HTTP, email, mobile push notifications, and mobile text messages.
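The publish/subscribe pattern described above can be pictured with a small sketch in plain Python. This is a toy model of SNS-style fan-out, not the SNS API: the `Topic` class and handler names are illustrative only.

```python
# Toy model of SNS-style fan-out: a topic delivers each published
# message to every subscriber, regardless of the delivery protocol.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []  # callables standing in for SQS, email, etc.

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # The publisher does not know who is listening; real SNS delivers
        # asynchronously, this sketch delivers synchronously.
        for handler in self.subscribers:
            handler(message)

received_by_queue, received_by_email = [], []
alerts = Topic("job-alerts")
alerts.subscribe(received_by_queue.append)   # stands in for an SQS queue
alerts.subscribe(received_by_email.append)   # stands in for an email endpoint
alerts.publish("JOB_X failed")
```

Every subscriber receives the same message, which is the decoupling SNS topics provide.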
Automation | November 18, 2020
Simplify Your Queue Service using AWS SQS Plugin with Workload Automation
Let us first understand what AWS Simple Queue Service (SQS) is and how it works: Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
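The decoupling idea behind a message queue can be sketched with Python's standard-library queue; this is an illustration of the pattern, not the SQS API.

```python
import queue
import threading

# Toy illustration of queue-based decoupling: producer and consumer never
# call each other directly; the queue absorbs bursts between them.
q = queue.Queue()
results = []

def consumer():
    while True:
        msg = q.get()          # blocks until a message arrives
        if msg is None:        # sentinel: stop consuming
            break
        results.append(msg.upper())  # stand-in for real processing

t = threading.Thread(target=consumer)
t.start()
for job in ["extract", "transform", "load"]:
    q.put(job)                 # the producer scales independently
q.put(None)
t.join()
```

The producer finishes enqueueing without waiting for processing, which is exactly how SQS lets microservices scale independently.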
Automation | November 18, 2020
Manage Your Blue Prism Process by using Blue Prism Plugin with Workload Automation
Before moving to the Blue Prism plugin, let us understand what Blue Prism is all about and how it benefits our workload automation users. Note: Blue Prism is an experimental plugin on our Automation Hub and may support limited features.
Automation | November 6, 2020
All you need to know about how to setup SAP connection in Workload Automation
The SAP batch access method enables communication between an external SAP system and Workload Automation, and provides a single point of entry for automating the launching of jobs, monitoring the status of jobs, and managing exceptions and recovery. “…What steps do you need to follow to quickly set up your Workload Automation environment against the target SAP system?...”
Automation | October 30, 2020
Docker Compose and Workload Automation containers: the 3 “S” joint venture
Are you familiar with docker-compose in the Workload Automation (WA) deployment process? It’s about time you started using it to deploy Workload Automation containers. The 3 “S” joint venture of WA, along with Docker Compose, stands by the following slogans: Simplicity? Not more than 5 steps. Speed? Just enough time for a coffee break. Security? No worries, we take care of everything.
Automation | October 27, 2020
Workload Automation and SAP best performance together
Using Workload Automation integrated with SAP®, you can create, schedule, and control SAP jobs and monitor your SAP landscape. SAP jobs run on application servers that host work processes of type batch. Critical batch jobs run in specific time frames, on specific application servers. With SAP Basis version 6.10 and later, application servers can be assigned to server groups. With Workload Automation, you can assign a server group to a job and, by leveraging the Job Throttling feature, manage all SAP background processes from several applications on one or more servers in heterogeneous environments. In this way, when a job is launched, the SAP system runs it on an application server that belongs to the specified group, balancing the workload among the various application servers.
Automation | October 23, 2020
Passing Variables from an Event Rule to a Job Stream
Event Rules are an extension of Workload Automation (WA) capabilities that enable events occurring outside the scheduling environment to trigger actions on scheduling objects within WA. An ideal use of this capability is to detect the arrival of a file and then trigger an action to submit a Job Stream containing jobs that process the data in that file. This capability has been available for a while and is widely used. In this article, a hidden feature is explored: the name of the file and other properties related to it are passed as variables to a variable table associated with the Job Stream being submitted as an action, so that any job within that Job Stream can retrieve those variables and process the data in the file.
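The mechanism can be pictured as a simple substitution step: file-event properties populate a variable table, and the submitted jobs resolve placeholder references against it. The sketch below is illustrative only; the property names, variable names, and the ^NAME^ placeholder style are assumptions for the example, not WA's exact syntax.

```python
# Sketch: a file-arrival event fills a variable table, and a job's command
# string resolves ^NAME^ placeholders against that table.
# Property/variable names here are illustrative, not WA's exact ones.
def fill_variable_table(event):
    return {"FILENAME": event["file_name"], "FILEDIR": event["directory"]}

def resolve(command, table):
    for name, value in table.items():
        command = command.replace(f"^{name}^", value)
    return command

event = {"file_name": "orders.csv", "directory": "/data/in"}
table = fill_variable_table(event)
cmd = resolve("process.sh ^FILEDIR^/^FILENAME^", table)
# cmd resolves to "process.sh /data/in/orders.csv"
```

Any job in the submitted Job Stream that references the table sees the same resolved values, so every step knows which file triggered the run.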
Automation | October 16, 2020
Creating a File Dependency for a Job Stream using Start Condition
Waiting for a file to arrive before a job can start processing it is the most common and quintessential requirement for any workload automation tool. Historically, this was done by creating a file dependency using OPENS and Unix tricks to manage wildcards and multiple matching files. Then Event-Driven Workload Automation was introduced, where an event rule could monitor a file with wildcards and, when the condition was satisfied, submit the dependent Job Stream.
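The classic polling approach that a start condition replaces can be sketched in a few lines of Python; the function name and pattern below are illustrative, not part of WA.

```python
import glob
import os
import tempfile
import time

# Sketch of the classic approach: poll until at least one file matches a
# wildcard pattern, then return the matches so the job stream can start.
def wait_for_files(pattern, timeout=10.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        matches = sorted(glob.glob(pattern))
        if matches:
            return matches      # file(s) arrived: release the dependency
        time.sleep(interval)
    raise TimeoutError(f"no file matching {pattern!r} arrived")

# Demonstration: drop a file into a temporary directory and wait for it.
d = tempfile.mkdtemp()
open(os.path.join(d, "in_001.dat"), "w").close()
found = wait_for_files(os.path.join(d, "in_*.dat"))
```

A start condition moves this wait out of hand-written scripts and into the scheduler itself, so the Job Stream simply does not start until the match exists.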
Automation | October 9, 2020
Manage your Automation by using “Automation Anywhere Bot Runner” and “Trader Plugin” with Workload Automation
Are you curious to try out the "Automation Anywhere" plugin? Download the integrations from the Automation Hub and get started today. Visit http://yourautomationhub.io.
Automation | October 8, 2020
Manage Your Data Using GCP Cloud Storage with Workload Automation
The HCL Workload Automation GCP Cloud Storage plugin is here. Learn how it benefits our workload automation users.
Automation | September 29, 2020
The power of Ansible inside Workload Automation
Can’t get enough of automating your business processes? We have what you are looking for! The Ansible plug-in is available on Automation Hub; download it to empower your Workload Automation environment. With the Ansible plug-in, you can monitor all your Ansible processes directly from the Dynamic Workload Console. Furthermore, you can schedule the execution of Ansible playbooks just by creating a simple job definition. Before starting to use the plug-in, you need to install Ansible on the same machine where the dynamic agent that runs the Ansible job is installed. You also need to set up the SSH protocol to communicate with Ansible. Let’s demonstrate, through an example, how easy it is to patch remote nodes with the Ansible plug-in.

Job definition

First, we need the code to patch the remote nodes; it is usually written with a yum module in a YAML file. This kind of file is called a playbook (playbook.yml), and we reference it using the Playbook path field. In this field we enter the absolute path to the playbook.yml file; we can use the Search button to browse for the path in the dynamic agent's file system. The content of the file is the following:

    ---
    - hosts: all
      name: Update packages
      tasks:
        - name: Update
          yum:
            name: "{{ module_name }}"
            state: latest

Then we need to set the variable “module_name” as an extra argument to correctly execute the playbook. Thus, in the Environment variables section we insert, for example, “module_name” in the Name column and an asterisk (*) in the Value column. The asterisk indicates that Ansible will update all modules found on the target machine. Next, we need to specify the remote nodes to which Ansible should connect....
Automation | September 23, 2020
Accelerate your Cloud Transformation! Take advantage from HCL Workload Automation on AWS Marketplace.
"You may not be responsible for the situation you are in, but you will become responsible if you do nothing to change it." - Martin Luther King. Get ready to accelerate your business by simplifying and automating workloads, improving service level agreements, and reducing deployment and management time with your cloud transformation! Cloud transformation is now the answer to many questions and customer needs: saving costs on IT operations and enabling faster time to market for new products and capabilities. The real value of cloud transformation is the organization’s new ability to quickly consume the latest technology and rapidly adapt and respond to market needs. A business transformation is not complete if the automation of processes is not also managed at both the IT and application levels. In this context, HCL Software is a real innovator and leader in the workload automation market with the availability of the HCL Workload Automation (HWA) solution which, integrated with an automation bot, HCL Clara, provides the answer for complete orchestration, monitoring and reporting of scheduled batch processes both on premises and in the cloud. To respond to the growing request to make automation opportunities more accessible, especially in the cloud, HCL Workload Automation is now offered on the Amazon Web Services cloud as an Amazon Elastic Kubernetes Service (Amazon EKS) deployment, a fully managed Kubernetes service with high security, reliability, and scalability. The strength of the innovation that HCL carries out with continuous and long-term investments is based on enabling the adoption of technology by minimizing the Total Cost of Ownership (TCO) and adopting paradigms such as containerization to facilitate product implementation and the transition to new releases. With the new release of HWA available in the AWS catalogue, we are proud to enable the digital transformation of our...
Automation | August 28, 2020
Manage your AWS resources by using AWSCloudFormation with Workload Automation
Let us begin with an understanding of what AWS CloudFormation is all about before moving on to our AWS CloudFormation plugin and how it benefits our workload automation users. AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources. Coming to our AWS CloudFormation plugin, the diagram below summarizes what the plugin can do; our workload automation customers can use it to simplify their infrastructure management and easily implement changes to infrastructure. To clarify its benefits, let us look at an example. For a scalable web application that also includes a back-end database, you might use an Auto Scaling group, an Elastic Load Balancing load balancer, and an Amazon Relational Database Service database instance. Normally, you might use each individual service to provision these resources, and after you create the resources, you would have to configure them to work together. All these tasks can add complexity and time before you even get your application up and running. Instead, you can create or modify an existing AWS CloudFormation template. A template describes all your resources and their properties. When you use that template to create an AWS CloudFormation stack, AWS CloudFormation provisions the Auto Scaling group, load balancer, and database for you. After the stack has been successfully created, your AWS resources are up and running. You can delete the stack just as easily, which deletes all the resources in the stack....
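To make "a template describes all your resources" concrete, here is a minimal sketch of a CloudFormation template in YAML. The resource names are illustrative only, and a real template for the web-application example above would declare the Auto Scaling group, load balancer, and database with their properties.

```yaml
# Minimal illustrative template: CloudFormation provisions every resource
# declared under Resources when the stack is created, and removes them
# all when the stack is deleted. Logical names are examples only.
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of a small two-resource stack
Resources:
  AppQueue:
    Type: AWS::SQS::Queue
  AppAlerts:
    Type: AWS::SNS::Topic
```

Because the template is plain text, it can be versioned like code and reused to create identical stacks in other regions or accounts.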
Automation | August 28, 2020
Simplify The Data Loading Using Oracle UCM and HCM Data Loader plugins with Workload Automation
Customers using Oracle Human Resources Cloud face the challenge of continuously bulk-loading large amounts of data at regular intervals. Oracle Human Resources Cloud provides tools like HCM Data Loader to address this business use case. Now you can automate data loading into Oracle Human Resources Cloud using the Oracle UCM and Oracle HCM Data Loader plugins, which leverage the HCM Data Loader for Workload Automation users. Business process automated (source: https://docs.oracle.com/en/cloud/saas/human-resources/20a/faihm/introduction-to-hcm-data-loader.html#FAIHM1372446): the diagram above shows the business process automated through these plugins. The process is divided into two steps, hence the two plugins:

1. A .zip file containing .dat files is placed on the Oracle WebCenter Content server. Here the Oracle WebCenter Content server acts as a staging infrastructure for files that are loaded and processed by the HCM Data Loader.
2. HCM Data Loader imports the data first into its stage tables and then into the application tables. Any errors that occur during either the import phase or the load phase are reported in the job status, with details in the job log.

Technical description and workflow

Oracle UCM plugin: the Oracle UCM plugin enables you to stage data files for processing by HCM Data Loader. It provides easier integration with other business processes by using the Oracle Universal Content Management (UCM) integration. The Oracle UCM Data Loader automates the process of bulk-loading data. You can load the data files and monitor them from a single point of control. The data is uploaded as zip files to Oracle UCM, where it is processed by the HCM Data Loader. This integration helps you save time and resources, and speeds up data loading in a secure manner. Prerequisite for the plugins to work: an Oracle Human Resources Cloud service account with the correct permissions to access the File Import and Export task...
Automation | August 19, 2020
Workload Automation – Customer-centric approach
A customer-centric company is more than a company that offers good customer service. Customer-centricity is our HCL Software business philosophy, based on putting our customers first and at the core of our business in order to provide a positive experience and build long-term relationships. In today’s uncertain world, not even the best contract can capture what will change tomorrow; a contract can only convert to business advantage through a value-centric relationship. In the Workload Automation family, we strongly believe in customer collaboration, and we have several programs that help us nurture relationships with our customers and involve them in product design and evolution. The Client Advocacy Program aims to accelerate customers' success and to create strategic relationships with HCL’s technical, management and executive leaders. The mission of our Client Advocacy Program is to build a direct relationship with our customers: we really want to be able to hear their voice. User experience (UX) design at HCL is based on the Design Thinking approach, which relies on users to stay in touch with real-world needs. We work with users to design and build the solution to their needs through their continuous participation in the design process. We really want to bring the user's voice into product design and development. What does this actually mean? We take care of the relationship with each customer, regardless of the program. The programs are often just the first engagement: everything can start from a specific request or by pure chance. From the very first meeting with our customers, we focus on addressing their needs and building trust, whether that happens in an Ask the Expert or in a Design Thinking session. We have tons of success stories that started from a simple question or even a complaint. The entire product team takes care of each customer by looking for the subject matter expert to answer each question.
The Customer Advocates are often the first point of contact in the entire organization. They are the customer's best buddy; they nurture the relationship with constant...
Automation | August 19, 2020
How To Make The Most out of ODI plugin in Workload Automation
Oracle Data Integrator provides a fully unified solution for building, deploying, and managing complex data warehouses, or as part of data-centric architectures in an SOA or business intelligence environment. In addition, it combines all the elements of data integration - data movement, data synchronization, data quality, data management, and data services - to ensure that information is timely, accurate, and consistent across complex systems. Oracle Data Integrator (ODI) features an active integration platform that includes all styles of data integration: data-based, event-based and service-based. ODI unifies silos of integration by transforming large volumes of data efficiently, processing events in real time through its advanced Changed Data Capture (CDC) framework, and providing data services to the Oracle SOA Suite. It also provides robust data integrity control features, assuring the consistency and correctness of data. With powerful core differentiators - heterogeneous E-LT, Declarative Design and Knowledge Modules - Oracle Data Integrator meets the performance, flexibility, productivity, modularity and hot-pluggability requirements of an integration platform. To leverage the benefits of the ODI plugin in Workload Automation, we have classified its use into two categories: the Oracle Data Integrator Scenario and the Oracle Data Integrator Load Plan. 1. Oracle Data Integrator Scenario: a scenario is the partially generated code (SQL, shell, etc.) for the objects (interfaces, procedures, etc.) contained in a package. When a component such as an ODI interface or package has been created and tested, you can generate the scenario corresponding to its actual state. Once generated, the scenario's code is frozen, and all subsequent modifications of the package and/or data models which contributed to its creation will not affect it. It is possible to generate scenarios for packages, procedures, interfaces or variables.
Scenarios generated for procedures, interfaces or variables are single-step scenarios that execute the procedure or interface, or refresh the variable. 2. Oracle Data Integrator Load Plan: Oracle Data Integrator is often...
Automation | August 13, 2020
Automate Project Create, Delete & Update with Google Cloud Deployment Manager using Workload Automation
Do you need to create, delete, or update a lot of Google Cloud Platform (GCP) projects? Maybe the sheer volume, or the need to standardize project operations, is making you look for a way to automate project management. We now have a tool to simplify this process for you: Workload Automation announces the GCP Deployment Manager plugin. The GCP Deployment Manager plugin automates the creation and management of Google Cloud resources. You can upload flexible template and configuration files to create and manage your GCP resources, including Compute Engine (i.e., virtual machines), Container Engine, Cloud SQL, BigQuery and Cloud Storage. You can use the plugin to create and manage projects, whether you have ten or ten thousand; automating the creation and configuration of your projects with Deployment Manager allows you to manage them consistently. Now, you can use the GCP Deployment Manager plugin from Workload Automation to create and manage projects. It allows you to specify all the resources needed for your application in a declarative format using YAML. You can parameterize the configuration and reuse common deployment paradigms, such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments. The user can focus on the set of resources which comprise the application or service instead of deploying each resource separately. It provides templates that serve as building blocks to create abstractions or sets of resources that are typically deployed together (e.g. an instance template, instance group, and autoscaler). These templates can be parameterized so they can be used over and over by changing the input values that define which image to deploy, the zone in which to deploy, or how many virtual machines to deploy. Prerequisite for the plugin to work: the user should have a service account, and the service account should have access to...
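The "declarative format using YAML" mentioned above looks roughly like the abbreviated sketch below. This is illustrative only: the resource name and zone are examples, and a deployable configuration would also declare disks and network interfaces for the instance.

```yaml
# Abbreviated sketch of a Deployment Manager configuration: resources are
# declared, not scripted, so the same file yields repeatable deployments.
# Name, zone, and machine type are illustrative values.
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-f
    machineType: zones/us-central1-f/machineTypes/f1-micro
```

Parameterizing values such as the zone or machine type is what lets one template be reused across many projects.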
Automation, Innovations | August 11, 2020
Introducing HCL Automation Power Suite Bundle to Automate More, Better and Smarter
HCL Software announced the introduction of the HCL Automation Power Suite bundle, comprising HCL Workload Automation, HCL Clara and HCL HERO. With Automation Power Suite, customers can automate more, automate better and automate smarter to build an enterprise automation platform.
Automation | August 7, 2020
Case Study: SAP Factory Calendar Import with HCL Workload Automation
This blog aims to show how an SAP calendar import can be done through Workload Automation. Workload Automation has had ready-made integration with SAP since the ’90s, leveraging the SAP RFC libraries through the SAP R/3 batch access method. Now, we will see how we can use this same access method to import freeday or workday calendars from an SAP R/3 system into Workload Automation. The r3batch access method can be invoked from the TWS/methods directory (in older versions) or from the TWSDATA/methods directory in newer versions. The export works for both freeday and workday calendars. The example below exports a freeday calendar, referenced by the factory calendar ID 02, into a text file /tmp/calendar_03.dat with the name HLI:

    wauser@wa-server:/opt/wa/TWS/methods$ ./r3batch -t RSC -c S4HANAR3BW -- " -calendar_id 02 -year_from 2020 -year_to 2021 -tws_name HLI -getfreedays -filename '/tmp/calendar_03.dat' "
    Tue Mar 10 09:48:58 2020

-t RSC indicates that the import is for an RFC SAP calendar.
-c <name> identifies the connection to the specific SAP system from which the calendar is imported.
-calendar_id XX denotes the 2-character identifier of the SAP R/3 calendar to be imported.
-year_from XXXX denotes the start year from which to begin exporting dates.
-year_to XXXX denotes the end year up to which to export dates.
-getfreedays indicates that the export is for freedays.
-filename '<PATH>/CalendarFileName' indicates the name of the file to which the export is written on the host OS where you issue the command.

The exported calendar can be viewed in the file as shown below:

    wauser@wa-server:/opt/wa/TWS/methods$ cat /tmp/calendar_03.dat
    $CALENDAR HLI ""
    01/01/2020 01/04/2020 01/05/2020 01/11/2020 01/12/2020 01/18/2020
    01/19/2020 01/25/2020 01/26/2020 02/01/2020 02/02/2020 02/08/2020
    02/09/2020 02/15/2020 02/16/2020 02/22/2020 02/23/2020...
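If you want to post-process an exported calendar file, its layout is simple enough to parse in a few lines. The sketch below assumes only what the example output shows: a `$CALENDAR NAME` header followed by MM/DD/YYYY dates; the function name is illustrative.

```python
from datetime import datetime

# Sketch: parse an exported $CALENDAR file into a name and a date list.
# Assumes the layout shown in the r3batch example: a "$CALENDAR NAME"
# header, an optional quoted description, then MM/DD/YYYY entries.
def parse_calendar(text):
    tokens = text.split()
    if tokens[0] != "$CALENDAR":
        raise ValueError("not a $CALENDAR export")
    name = tokens[1]
    dates = [datetime.strptime(t, "%m/%d/%Y").date()
             for t in tokens[2:] if "/" in t]   # skip the "" description
    return name, dates

name, dates = parse_calendar('$CALENDAR HLI "" 01/01/2020 01/04/2020 01/05/2020')
```

Such a parser makes it easy to verify an import, e.g. to count the freedays per month before loading the calendar.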
Automation | August 4, 2020
Unleash the power of HCL Workload Automation in an Amazon EKS cluster
Don't get left behind! The new era of digital transformation has moved businesses on to new operating models such as containers and cloud orchestration. Let’s find out how to get the best of Workload Automation (WA) by deploying the solution on a cloud-native environment such as Amazon Elastic Kubernetes Service (Amazon EKS). This type of deployment makes the WA topology implementation 10x easier, 10x faster, and 10x more scalable compared to the same deployment on a classical on-premises platform. In an Amazon EKS deployment, to best fit the cloud networking needs of your company, you can select the appropriate cloud networking components supported by the WA Helm chart for the server and console components:

- Load balancers
- Ingresses

You can also leverage the Grafana monitoring tool to display WA performance data and metrics related to the server and console application servers (WebSphere Application Server Liberty Base). Grafana needs to be installed manually on Amazon EKS to have access to Grafana dashboards. Metrics provide drill-down for the state, health, and performance of your WA deployment and infrastructure. In this blog you can discover how to:

- Deploy WA components (server, agent, console) in an Amazon EKS cluster, using one of the available cloud network configurations.
- Download the Kubernetes job plug-in from the Automation Hub website and configure it in your AWS EKS cloud environment.
- Monitor the WA solution from the WA customized Grafana dashboard.

Let’s start by taking a tour!

Deploy WA components (server, agent, console) in an Amazon EKS cluster, using one of the available network configurations

In this example, we set up the following topology for the WA environment, and we configure the ingress network configuration for the server and console components:

- 1 server
- 2 dynamic agents
- 1 console

Let’s demonstrate how you can roll out the deployment...
Automation | August 3, 2020
How to Automate SAP HANA Lifecycle Management in Workload Automation
Before learning about our plugin's use cases and how it benefits our workload automation users, let us get a little insight into what the SAP HANA Cloud Platform is. SAP Cloud Platform (SCP) is a platform-as-a-service (PaaS) product that provides a development and runtime environment for cloud applications. Based on SAP HANA in-memory database technology, and using open source and open standards, SCP allows independent software vendors (ISVs), startups and developers to create, deploy and test HANA-based cloud applications. SAP offers different development environments, including Cloud Foundry and Neo, and provides a variety of programming languages. Neo is a feature-rich and easy-to-use development environment that allows you to develop, deploy and monitor Java, SAP HANA XS, and HTML5 applications. The SAP HANA LCM plugin can automate and orchestrate some of the deploy and monitor functionalities of a Java application, such as state, start, stop, delete, and redeploy. Let’s see what our plugin does. Log in to the Dynamic Workload Console and open the Workload Designer. Choose to create a new job and select the “SAP HANA Cloud Platform Application Lifecycle” job type in the ERP section. Select the General tab and specify the required details, such as Folder, Name, and Workstation. Establishing a connection to the SAP HANA Cloud Platform: in the Connection tab, we specify input parameters such as Hostname, Port, Account name and Account credentials to let Workload Automation interact with SAP HANA Cloud, and click Test Connection. A confirmation message is displayed when the connection is established. Certification and Retry options are optional fields.
In the Action tab, specify the Application Name and the action to perform based on the requirement. Different kinds of actions are available:

- State: presents the current state of the application
- Start: starts the application
- Stop: stops the application
- Re-Deploy: updates application/binaries parameters and uploads one or more binaries
- Delete: deletes the application

Clicking the Search button opens a popup...
Automation | August 3, 2020
Make the deployment easier, get the most from Workload Automation in OpenShift
Can you have your desired number of Workload Automation (WA) agent, server, and console instances running whenever and wherever? Yes, you can! Starting from Workload Automation version 9.5 fix pack 2, you can deploy the server, console, and dynamic agent on OpenShift 4.2 or later platforms. This kind of deployment makes Workload Automation topology implementation 10x faster and 10x more scalable compared to the same deployment on the classical on-premises platform. Workload Automation provides an effortless way to create, update, and maintain both the installed WA operator instance and the WA component instances, also by leveraging the Operators feature that Red Hat introduced starting from OCP 4.1. In this blog, we address the following use cases for the new deployment method:

- Download and install the WA operator and WA component images
- Scheduling: scale the number of WA instances up and down
- Monitor WA resources to improve quality of service by using the OCP dashboard console

Download and install the WA operator and WA component images. WA operator and pod instance prerequisites: before starting the WA deployment, ensure your environment meets the required prerequisites. For more information, see https://www.ibm.com/support/knowledgecenter/SSGSPN_9.5.0/com.ibm.tivoli.itws.doc_9.5/README_OpenShift.html. Download the Workload Automation Operator and product component images from the appropriate web site. Once deployed, the Operator can be used to install the Workload Automation components and manage the deployment thereafter. The following packages are available:

Flexera (HCL version):
- 9.5.0-HCL-IWS_OpenShift_Server_UI_Agent_FP0002.zip, containing the images for all HCL components. Download this package to install either all or selected components (agent, server, console).
- 9.5.0-HCL-IWS_OpenShift_Agent_FP0002, containing the HCL agent image.

Fix Central (IBM version):
- 9.5.0-IBM-IWS_OpenShift_Server_UI_Agent_FP0002.zip, containing the images for all IBM components. Download this package to install either all or selected components (agent, server, console).
- 9.5.0-IBM-IWS_OpenShift_Agent_FP0002, containing the IBM agent image.

Each operator package has the following structure (keep...
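In the Operator pattern, installing and scaling the components comes down to declaring the desired state in a custom resource and letting the Operator reconcile it. As a purely illustrative sketch (the apiVersion, kind, and field names below are placeholders, not the actual WA operator schema, which is described in the README linked above), scaling the agent pods up or down amounts to editing a replica count and re-applying the resource:

```yaml
# Illustrative only: apiVersion, kind, and field names are placeholders,
# not the real Workload Automation operator schema.
apiVersion: example.workload.automation/v1
kind: WorkloadAutomation
metadata:
  name: wa-instance
spec:
  waconsole:
    replicaCount: 1
  waserver:
    replicaCount: 1
  waagent:
    replicaCount: 3   # scale the dynamic agent pods up or down here
```

After editing the replica count, re-applying the resource (for example with `oc apply -f wa-instance.yaml`) lets the Operator reconcile the running pods to match the declared state.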
Automation | July 31, 2020
Agents and Reports: Oracle Business Intelligence, A step ahead with Workload Automation
Are you familiar with Automation Hub? Now is the time. Data: we have plenty of data to process. Analysis: we have multiple software tools and algorithms to analyze and process that data. The question is how to present or publish the processed data as a report. A few tools can do this, but will the generated reports be flexible enough for reuse? Will report maintenance be easy? Will they have an optimized data extraction and generation process? These are hard questions to answer, because it is not easy to consistently maintain and produce efficient reports from huge volumes of data. Major vendors like Oracle and IBM have the means to satisfy customers with the required features. Our OBI Run Report plugin, which works with Oracle BI Publisher, answers all these questions with minimal effort, and users can combine it with the workload automation tool. Reports are then generated and published according to your requirements. But how do they reach the customers? The simple answer is: by using agents. Agents can deliver the reports to customers based on trigger events and targets. Targets can differ; in other words, the delivery routes can vary with multiple conditions and requirements. Agents are triggered by schedules or conditions that in turn generate a request to perform analytics on data based upon defined criteria, which can be used for report scheduling as well as for alerts sent to the required recipients on different web-accessible communication devices. Agents also provide proactive delivery of real-time, personalized, and actionable intelligence throughout the business network. As the next feature, we introduce the OBI agent, which helps satisfy this requirement. This plugin shares its features with the Oracle iBot/agent in all aspects, and they are part of session-based web services...
Automation | July 27, 2020
Enforce Workload Automation continuous operation by activating Automatic failover feature and an HTTP load balancer
How important is it that your Workload Automation environment is healthy, up and running, with no workload stops or delays? What happens if your master domain manager becomes unavailable or is affected by downtime? What manual recovery steps must you take when that happens? How can you distribute requests simultaneously to several application servers in your configuration when your primary server is overloaded? How can you easily monitor the health of the Workload Automation environment every hour? How can you set up an alerting mechanism? The answer is: Workload Automation 9.5 FP2 with the Automatic failover feature enabled, combined with the NGINX load balancer! Let's start by introducing the components participating in the solution. = Workload Automation 9.5 FP2 introduces the Automatic failover feature = When the active master domain manager becomes unavailable, this feature enables an automatic switchover to a backup engine and event processor server. It ensures continuous operation: you configure one or more backup engines so that when a backup detects that the active master has become unavailable, it triggers a long-term switchmgr operation to itself. You can define potential backups in a list, adding preferred backups at the top of the list. The backup engines monitor the behaviour of the master domain manager to detect anomalous behaviour. = NGINX load balancer = Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. You can use NGINX as a very efficient HTTP load balancer to distribute traffic to several application servers and to improve the performance, scalability, and reliability of web applications. NGINX acts as a single entry point to a distributed web application running on multiple separate servers.
Let's continue by analyzing our use case solution: we experimented with the solution by defining and using this environment during the formal test phase of the 9.5 FP2 project. ...
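To make the NGINX piece concrete, here is a minimal sketch of an HTTP load-balancer configuration in front of two WA application servers. The host names and ports are placeholders, and a production setup would also need TLS certificates and health-check tuning:

```nginx
# Minimal sketch: NGINX as the single entry point in front of a master
# and backup master. Host names and ports are placeholders.
upstream wa_servers {
    server wa-mdm.example.com:9443;       # active master domain manager
    server wa-bkm.example.com:9443;       # backup master domain manager
}

server {
    listen 443 ssl;
    server_name wa.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location / {
        proxy_pass https://wa_servers;    # requests are balanced across the upstream
    }
}
```

With both servers listed plainly in the upstream block, NGINX round-robins requests between them; marking the second one with the `backup` parameter instead would send it traffic only when the primary is unreachable.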
Automation | July 27, 2020
Custom dashboard: the fifth element that gives you control over all of your environments
No matter if your environment is based on a rock-solid z/OS controller or on lightweight, easily scalable Docker instances, or if your distributed, on-premises master and backup master are rocking your workload like fire and water. Earth, wind, water, and fire... if you want control over each element, you need the fifth element: your custom dashboard! It's easy to create and customize your dashboard to get an at-a-glance view of every aspect that matters to you and your organization. Each dashboard is composed of several data sources and widgets that can be customized and combined in the new era of dashboards (see the 15-Jun-20 blog post "Welcome to the new Era of Dashboards"). But you can also optimize your dashboard to monitor different kinds of environments together. Let's see how it works. Cross-engine widgets: if you need an overview of the entire workload across all of your environments, you can use, for example, the Jobs count by status data source in a pie chart to get a quick overview of how many jobs are waiting, running, ended in error, or successful. To make this data source and widget work across multiple environments, you first need to add an engine list. The D engine list and Z engine list are optimized for homogeneous environments, while for a hybrid (distributed and z/OS) environment you have to select the Engine list. At this point you can also add the desired widget and customize all fields as you can see below. Widgets based on data sources with a pre-defined engine: however, the best way to monitor a hybrid environment is to use specific data sources for each engine. For example, if you need to monitor critical jobs, duplicate the Critical jobs by status data source and name it after the...
Automation | July 24, 2020
Stay tuned! New Workload Automation instructional videos coming your way!
How-to videos are now live on the Workload Automation YouTube channel. Video is a unique way of communicating a lot of information in a short period of time. Videos let you retain more information, understand concepts more rapidly, and feel more enthusiastic about what you are learning. The Workload Automation team has been thinking of you and is hard at work developing how-to tutorial videos to introduce you to new features and help you realize both frequent everyday scenarios and more complex ones. Want to know how you can configure authentication using LDAP in version 9.5? Or want to see how you can monitor Workload Automation in a Red Hat OpenShift cluster environment, using the open-source tool Grafana for visualizing application metrics? How about learning how to replace the default DWC welcome page with a customized page that embeds an external website, data returned from an external REST API, and some static text? These are just some examples of what you can discover on the channel today! Want to see more? Browse this list of videos to discover something new that you weren't aware of, or a time-saving tip to make your workload lighter.

- Author: Alessandro Tomasi. Video: HCL Workload Automation - Designing and submitting your workload. The video shows how to create a job stream with a run cycle, add a job, submit a job stream into a plan, and see the job log. Link: https://www.youtube.com/watch?v=HxNz0AjV7lM&list=PLZ87gBR2Z8047IaJRZShgFyXJZYmIlUsY&index=12
- Authors: Alessandro Tomasi and Michele Longobardo. Video: HCL Workload Automation - Monitoring Workload Automation components using Grafana Dashboard. The video gives an overview of the new Grafana dashboard: you can see the jobs, the message box usage, pod status, and Liberty metrics. Link: https://www.youtube.com/watch?v=FPspjh2ZT2M
- Author: Alessandro Tomasi. Video: HCL Workload Automation - Using variable tables to create more flexible workloads. The...
Latest Articles

Automation, Innovations | December 22, 2020
Are you ready for an exciting, immersive experience in Workload Automation?
As human beings, we have always been fascinated by the unknown, and we need to understand it, interpret it, and draw conclusions from it. Today, however, living in an increasingly hyper-technological, interconnected world where we collect billions upon billions of pieces of information and data, we struggle because our ability to acquire data exceeds our ability to give it meaning. Beyond the data itself, visualization becomes crucial in driving root cause analysis, explaining concepts, and extracting useful insights from data. Visualization can also help make data understandable to non-experts.

Automation | December 1, 2020
Exploit the new commands available for the WA plugin for Zowe CLI V1.1.0
Zowe and its major components (Web UI, API Mediation Layer, and CLI) are likely to become the new interface for the next generation of mainframers. The Zowe framework is the bridge that connects modern applications with the mainframe by providing easier interoperability and scalability among products and solutions offered by multiple vendors. Developers, testers, operators, and any other professionals in the mainframe realm can easily create their own tools to automate tasks that would usually be done manually or through mainframe-native tools. They can build, modify, and debug z/OS applications even with limited z/OS expertise.