No matter if your environment is based on a rock-solid z/OS controller or on lightweight, easily scalable Docker instances, or if your distributed, on-premises master and backup master are rocking your workload like fire and water.

Earth, wind, water and fire: if you want to have control over each element, you need the fifth spirit, your custom dashboard!

It’s easy to create and customize a dashboard that gives you and your organization control, at a glance, over every aspect that matters to you.

Each dashboard is composed of several datasources and widgets that can be customized and combined in the new era of dashboards (see the 15-Jun-20 blog post “Welcome to the new Era of Dashboards”).

But you can also optimize your dashboard to monitor different kinds of environments all together. Let’s see how it works. 

Cross-engine widgets 

If you need an overview of the entire workload across all of your environments, you can use, for example, the Jobs count by status datasource in a pie chart for a quick overview of how many jobs are waiting, running, ended in error, or ended successfully. A sketch of the aggregation behind such a widget follows.
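As a purely illustrative sketch of the kind of aggregation that sits behind such a widget, the Python snippet below tallies job statuses returned by a REST call. The endpoint URL, host, credentials, and response shape are hypothetical assumptions for illustration, not the product’s documented API.

    from collections import Counter
    import requests

    # Hypothetical REST endpoint returning a list of jobs; host, path,
    # and credentials are assumptions for illustration only.
    DWC_URL = "https://dwc.example.com:9443/dwc/rest/jobs"

    resp = requests.get(DWC_URL, auth=("wauser", "secret"))
    resp.raise_for_status()

    # Tally statuses (e.g. WAITING, RUNNING, ERROR, SUCCESSFUL) to feed a pie chart.
    counts = Counter(job["status"] for job in resp.json())
    print(counts)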

To make this datasource and widget work across multiple environments, you first need to add an engine list.

The D engine list and Z engine list are optimized for homogeneous environments, while for a hybrid (distributed and z/OS) environment you have to select the Engine list.

At this point you can also add the desired widget and customize all its fields, as you can see below.

Widgets based on datasources with a pre-defined engine

However, the best way to monitor a hybrid environment is to use a specific datasource for each engine.

For example, if you need to monitor the Critical jobs:

  • Duplicate the Critical jobs by status datasource and name it after the engine 
  • Edit it 
  • Deselect the checkbox “Select this option if you want the datasource to be based on engine selection” 
  • Add the engine name and the engine owner in the URL 
  • Save it 

Repeat these steps for each engine; the customization is the same for distributed and z/OS engines. A sketch of the resulting per-engine datasources follows.
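Here is a minimal sketch of the idea: one copy of the datasource per engine, each with its engine fixed in the URL. The base URL, the engineName/engineOwner parameter names, and the engine names themselves are hypothetical assumptions for illustration.

    import requests

    # Hypothetical base URL of the duplicated "Critical jobs by status" datasource.
    BASE_URL = "https://dwc.example.com:9443/dwc/rest/criticaljobs"

    # One fixed engine per duplicated datasource (names are made up).
    engines = [
        {"engineName": "MDM_DIST", "engineOwner": "wauser"},
        {"engineName": "ZC_PROD", "engineOwner": "zosadmin"},
    ]

    for engine in engines:
        # Each copy embeds its engine in the URL instead of relying on the
        # dashboard's engine selection.
        resp = requests.get(BASE_URL, params=engine, auth=("wauser", "secret"))
        print(engine["engineName"], resp.status_code)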

Now that your four datasources are ready, you can go back to your dashboard and easily create the four widgets.

As you can see, once you have customized your first widget, you can just duplicate it and change the associated datasource. It’s easy and time-saving; you can take advantage of this tip every time you want to define multiple widgets on similar datasources.

Add filters to your datasources 

You can also refine the datasources to monitor a specific subset of your workload, for example to count only the jobs belonging to a specific Line of Business or the workstations matching a specific naming convention.

If you are working with a REST datasource, such as Jobs count by status, you can simply start from an existing datasource and duplicate it.

Remember to deselect the checkbox “Select this option if you want the datasource to be based on engine selection” and specify the engine name and owner (if they are not already configured). Then simply add the desired filters in the body section.

 

Note that the filters available on distributed engines are JOB_NAME, JOB_STREAM_NAME, JOB_WKS_NAME and WORKSTATION, while z/OS engines allow only the JOB_NAME, JOB_STREAM_NAME and JOB_WKS_NAME filters. A sketch of such a filtered request follows.
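The following sketch shows what such a filtered request body could look like; the endpoint and the exact body layout are hypothetical assumptions, while the filter names come from the list above.

    import requests

    # Hypothetical endpoint and body layout, for illustration only.
    URL = "https://dwc.example.com:9443/dwc/rest/jobs/count"

    body = {
        "engineName": "MDM_DIST",           # fixed engine (made-up name)
        "engineOwner": "wauser",
        "filters": {
            "JOB_STREAM_NAME": "PAYROLL*",  # only one Line of Business
            "JOB_WKS_NAME": "FTA*",         # workstations matching a naming convention
            # WORKSTATION would also be valid here, but only on distributed engines.
        },
    }

    resp = requests.post(URL, json=body, auth=("wauser", "secret"))
    print(resp.json())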

If you are working with a Plan datasource, such as Available or Unavailable Workstations, it’s even easier (a sketch follows the steps):

  • Create a new Plan datasource or duplicate an existing one; 
  • Select the desired Engine and the Object Type you are looking for; 
  • Select the “Express output in numbers” option and deselect the “Specify engine from board” option to optimize the performance of the datasource; 
  • Click the Edit button to display and fill in all the desired filters available for the current datasource; 
  • Save it. 
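As a rough sketch of what such a numeric Plan datasource boils down to, the snippet below counts the workstations matching a filter. The endpoint and parameter names are hypothetical assumptions, not the product’s documented API.

    import requests

    # Hypothetical Plan endpoint and parameter names, for illustration only.
    URL = "https://dwc.example.com:9443/dwc/rest/plan/workstations"

    params = {
        "engineName": "MDM_DIST",  # engine fixed in the datasource, not taken from the board
        "status": "AVAILABLE",     # the object state being monitored
        "name": "AGT*",            # naming-convention filter
    }

    resp = requests.get(URL, params=params, auth=("wauser", "secret"))
    workstations = resp.json()
    # "Express output in numbers": the widget displays a single count.
    print(len(workstations))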

 

Once you have learned how to manage, customize, filter and even optimize your Plan and REST API datasources, you can keep virtually everything in your environment under control. Whether you are interested in the unanswered prompts on your distributed master or the special resources on your z/OS controller, you can now tame all of your environments from a single point of control.

Authors

Davide Canalis, Software Engineer 

Davide graduated in Mathematics and works as a Software Engineer at the HCL Products and Platforms software development laboratory in Rome. He has been a member of the z/OS development IZWS team since April 2017, where he became the team’s REST API expert.

LinkedIn: https://www.linkedin.com/in/davide-canalis-659b84135/

 

Elvira Zanin, Software Engineer, HCL Technologies 
 
Elvira Zanin is a Software Engineer on the Workload Automation development team located in the HCL Rome Hub. She is currently part of the WebUI development team and was previously involved in the Test automation and Add-ons development teams. Elvira has experience with the Dynamic Workload Console. She completed her degree in Computer Science at the University of Salerno and currently lives in Rome, Italy.

LinkedIn: https://www.linkedin.com/in/elvira-zanin-330893110/
