Workload Automation
Number of Posts: 21
Automation | August 28, 2020
Manage your AWS resources by using AWS CloudFormation with Workload Automation
Let us begin by understanding what AWS CloudFormation is all about before moving on to our AWS CloudFormation plugin and how it benefits Workload Automation users.

AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. It allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources.

Coming to our AWS CloudFormation plugin, the diagram below summarizes what it can do, so Workload Automation customers can use it to simplify infrastructure management and implement infrastructure changes more easily.

To make the benefits clearer, consider the following example. For a scalable web application that also includes a back-end database, you might use an Auto Scaling group, an Elastic Load Balancing load balancer, and an Amazon Relational Database Service database instance. Normally, you might use each individual service to provision these resources, and after you create them you would have to configure them to work together. All these tasks add complexity and time before you even get your application up and running.

Instead, you can create or modify an existing AWS CloudFormation template. A template describes all your resources and their properties. When you use that template to create an AWS CloudFormation stack, AWS CloudFormation provisions the Auto Scaling group, load balancer, and database for you. After the stack has been successfully created, your AWS resources are up and running. You can delete the stack just as easily, which deletes all the resources in the stack....
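To make the stack lifecycle concrete, here is a minimal sketch using the AWS CLI, assuming a template file named web-app.yaml (a hypothetical name) that declares the Auto Scaling group, load balancer, and database described above; this is the kind of stack operation the plugin is meant to automate from a Workload Automation job.

# Minimal sketch with the AWS CLI: the template file name and stack name
# are illustrative placeholders, not taken from the plugin documentation.
aws cloudformation create-stack \
    --stack-name web-app-stack \
    --template-body file://web-app.yaml

# Block until every resource in the stack has been provisioned
aws cloudformation wait stack-create-complete --stack-name web-app-stack

# Deleting the stack removes all the resources it created
aws cloudformation delete-stack --stack-name web-app-stack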
Automation | August 28, 2020
Simplify Data Loading Using the Oracle UCM and HCM Data Loader Plugins with Workload Automation
Customers using Oracle Human Resources Cloud face the challenge of continuously bulk loading large amounts of data at regular intervals. Oracle Human Resources Cloud provides tools like HCM Data Loader that address this business use case. Now you can automate data loading into Oracle Human Resources Cloud using the Oracle UCM and Oracle HCM Data Loader plugins, which leverage the HCM Data Loader for Workload Automation users.

Business process automated:

Source: https://docs.oracle.com/en/cloud/saas/human-resources/20a/faihm/introduction-to-hcm-data-loader.html#FAIHM1372446

The above diagram shows the business process automated through these plugins. The process is divided into two steps, hence the two plugins:

1. A .zip file containing .dat files is placed on the Oracle WebCenter Content server, which acts as a staging infrastructure for files that are loaded and processed by the HCM Data Loader.
2. HCM Data Loader imports the data first into its stage tables and then into application tables. Any errors that occur during either the import or the load phase are reported in the job status, with details in the job log.

Technical description and workflow

Oracle UCM plugin

The Oracle UCM plugin enables you to stage data files for processing by HCM Data Loader. It provides easier integration with other business processes by using the Oracle Universal Content Management (UCM) integration. The Oracle UCM Data Loader automates the process of bulk loading data. You can load the data files and monitor them from a single point of control. The data is uploaded as .zip files to Oracle UCM, which are then processed by the HCM Data Loader. This integration helps you save time and resources and speeds up data loading in a secure manner.

Prerequisite for the plugins to work:
- Oracle Human Resources Cloud service account with correct permissions to access the File Import and Export task...
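As a rough illustration of what gets staged in step 1, here is a minimal sketch of packaging an HCM Data Loader file for upload to Oracle WebCenter Content; the business object, attribute list, and file names are illustrative assumptions, not taken from the plugin documentation.

# Hypothetical Worker.dat in HCM Data Loader pipe-delimited format:
# a METADATA line naming the attributes, followed by MERGE lines with the data.
cat > Worker.dat <<'EOF'
METADATA|Worker|SourceSystemOwner|SourceSystemId|PersonNumber|StartDate
MERGE|Worker|HRC_SQLLOADER|EMP_1001|1001|2020/01/01
EOF

# HCM Data Loader consumes a .zip of one or more .dat files; an archive like
# this is what the Oracle UCM plugin stages on the content server.
zip Worker.zip Worker.dat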
Automation | August 19, 2020
Workload Automation – Customer-centric approach
A customer-centric company is more than a company that offers good customer service. Customer centricity is our HCL Software business philosophy, based on putting our customers first and at the core of the business in order to provide a positive experience and build long-term relationships. In today's uncertain world, not even the best contract can capture what will change tomorrow. A contract can only convert to business advantage through a value-centric relationship. In the Workload Automation family, we strongly believe in customer collaboration, and we have several programs that help us nurture relationships with our customers and involve them in the product design and evolution.

The Client Advocacy Program is aimed at accelerating customers' success and creating strategic relationships with HCL's technical, management, and executive leaders. The mission of our Client Advocacy Program is to build a direct relationship with our customers. We really want to be able to hear their voice.

The user experience (UX) design in HCL is based on the Design Thinking approach, which relies on users to stay in touch with real-world needs. We work with users to design and build the solution to their needs through their continuous participation in the design process. We really want to bring the user voice into product design and development.

What does this actually mean? We take care of the relationship with each customer, regardless of the program. The programs are often just the first engagement: everything can start from a specific request or by pure chance. From the very first meeting with our customer we focus on addressing her/his needs and building trust, whether that happens in an Ask the Expert or in a Design Thinking session. We have tons of success stories that started from a simple question or even a complaint. The entire product team takes care of each customer by looking for the subject matter expert to answer each question. The Customer Advocates are often the first point of contact in the entire organization. They are the customer's best buddy; they nurture the relationship with constant...
Automation | August 13, 2020
Automate Project Creation, Deletion & Update with Google Cloud Deployment Manager Using Workload Automation
Do you need to create, delete, or update a lot of Google Cloud Platform (GCP) projects? Maybe the sheer volume, or the need to standardize project operations, is making you look for a way to automate project management. We now have a tool to simplify this process for you: Workload Automation announces the GCP Deployment Manager plugin.

The GCP Deployment Manager plugin automates the creation and management of Google Cloud resources. You can upload flexible template and configuration files to create and manage your GCP resources, including Compute Engine (virtual machines), Container Engine, Cloud SQL, BigQuery, and Cloud Storage. You can also use the plugin to create and manage projects: whether you have ten or ten thousand, automating their creation and configuration with GCP Deployment Manager allows you to manage projects consistently.

Now, you can use the GCP Deployment Manager plugin from Workload Automation to create and manage projects. It allows you to specify all the resources needed for your application in a declarative format using YAML. You can parameterize the configuration and reuse common deployment paradigms such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments. You can focus on the set of resources that comprise the application or service instead of deploying each resource separately. Deployment Manager provides templates that let you use building blocks to create abstractions or sets of resources that are typically deployed together (for example, an instance template, an instance group, and an autoscaler). These templates can be parameterized so they can be reused over and over by changing input values that define which image to deploy, the zone in which to deploy, or how many virtual machines to deploy.

Prerequisite for the plugin to work:

The user should have a service account, and the service account should have access to...
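To show what such a declarative configuration can look like, here is a minimal sketch of a Deployment Manager config for a single VM and the gcloud commands that create, update, and delete the deployment; the resource names, zone, machine type, and image are placeholders, and the plugin's own job parameters are not shown here.

# Minimal sketch: a Deployment Manager configuration declaring one VM
# (all names, the zone, and the image are illustrative placeholders).
cat > vm-config.yaml <<'EOF'
resources:
- name: demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
EOF

# Create the deployment, then update or tear it down with the same config
gcloud deployment-manager deployments create demo-deployment --config vm-config.yaml
gcloud deployment-manager deployments update demo-deployment --config vm-config.yaml
gcloud deployment-manager deployments delete demo-deployment -q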
Automation | August 7, 2020
Case Study: SAP Factory Calendar Import with HCL Workload Automation
This blog aims to show how SAP calendar import can be done through Workload Automation. Workload Automation has offered ready-made integration with SAP since the 90s, leveraging the SAP RFC libraries through the SAP R/3 batch access method. Now we will see how we can use this same access method to import freeday calendars or workday calendars from an SAP R/3 system into Workload Automation.

The r3batch access method can be invoked from the TWS/methods directory (in older versions) or from the TWSDATA/methods directory in newer versions. The export can be done for both freeday and workday calendars. The example below exports a freeday calendar, referenced by the factory calendar ID 02, into the text file /tmp/calendar_03.dat with the name HLI:

wauser@wa-server:/opt/wa/TWS/methods$ ./r3batch -t RSC -c S4HANAR3BW -- " -calendar_id 02 -year_from 2020 -year_to 2021 -tws_name HLI -getfreedays -filename '/tmp/calendar_03.dat' "
Tue Mar 10 09:48:58 2020

-t RSC indicates that the import is for an RFC SAP calendar.
-c <name> identifies the connection to the specific SAP system from which the calendar is imported.
-calendar_id XX denotes the two-character identifier of the SAP R/3 calendar to be imported.
-year_from XXXX denotes the start year from which to begin exporting dates.
-year_to XXXX denotes the end year up to which dates are exported.
-getfreedays indicates that the export is for freedays.
-filename '<PATH>/CalendarFileName' indicates the name of the file to which the export is written on the host OS where you issue the command.

The exported calendar can be viewed in the file as shown below:

wauser@wa-server:/opt/wa/TWS/methods$ cat /tmp/calendar_03.dat
$CALENDAR
HLI
  ""
  01/01/2020 01/04/2020 01/05/2020 01/11/2020 01/12/2020 01/18/2020 01/19/2020
  01/25/2020 01/26/2020 02/01/2020 02/02/2020 02/08/2020 02/09/2020 02/15/2020
  02/16/2020 02/22/2020 02/23/2020...
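Once the calendar has been exported in this format, one possible next step (a sketch, assuming the WA environment has been sourced with tws_env and that the file already follows the calendar definition syntax shown above) is to load it into the Workload Automation database with composer:

# Add the exported calendar definition to the WA database
composer add /tmp/calendar_03.dat

# Verify that the HLI calendar is now defined
composer display calendars=HLI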
Automation | August 4, 2020
Unleash the power of HCL Workload Automation in an Amazon EKS cluster
Don't get left behind! The new era of digital transformation has moved businesses on to new operating models such as containers and cloud orchestration. Let's find out how to get the best of Workload Automation (WA) by deploying the solution on a cloud-native environment such as Amazon Elastic Kubernetes Service (Amazon EKS). This type of deployment makes the WA topology implementation 10x easier, 10x faster, and 10x more scalable compared to the same deployment on a classical on-premises platform.

In an Amazon EKS deployment, to best fit the cloud networking needs of your company, you can select the appropriate networking cloud components supported by the WA Helm chart to be used for the server and console components:
- Load balancers
- Ingresses

You can also leverage the Grafana monitoring tool to display WA performance data and metrics related to the server and console application servers (WebSphere Application Server Liberty Base). Grafana needs to be installed manually on Amazon EKS to have access to Grafana dashboards. Metrics provide drill-down into the state, health, and performance of your WA deployment and infrastructure.

In this blog you can discover how to:
- Deploy the WA components (server, agent, console) in an Amazon EKS cluster, using one of the available cloud network configurations.
- Download the Kubernetes job plug-in from the Automation Hub website and configure it in your Amazon EKS cloud environment.
- Monitor the WA solution from the customized WA Grafana dashboard.

Let's start by taking a tour!

Deploy WA components (server, agent, console) in an Amazon EKS cluster, using one of the available network configurations

In this example, we set up the following topology for the WA environment and configure the ingress network configuration for the server and console components:
- 1 server
- 2 dynamic agents
- 1 console

Let's demonstrate how you can roll out the deployment...
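As an idea of what that rollout can look like from the command line, here is a hedged sketch of a Helm-based installation; the repository URL, chart name, and values keys below are assumptions for illustration and should be replaced with the ones documented for the WA Helm chart.

# Illustrative only: repository URL, chart name, and values keys are assumptions.
helm repo add wa-chart https://example.com/hcl-workload-automation/charts
helm repo update

# Deploy 1 server, 2 dynamic agents, and 1 console into an existing EKS cluster,
# selecting the ingress network configuration for server and console.
helm install wa wa-chart/hcl-workload-automation-prod \
  --namespace workload-automation --create-namespace \
  --set waserver.replicaCount=1 \
  --set waagent.replicaCount=2 \
  --set waconsole.replicaCount=1 \
  --set global.ingress.enabled=true

# Check that the server, agent, and console pods come up
kubectl get pods -n workload-automation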