Serena Girardini
Workload Automation Test Technical Leader
About
Serena Girardini is the System Verification Test Team Leader for the Workload Automation product in distributed environments. She joined IBM in 2000 as a Tivoli Workload Scheduler developer and was involved in the product's relocation from the San Jose Lab to the Rome Lab during a short-term assignment in San Jose, CA. Over 14 years, Serena gained experience across the Tivoli Workload Scheduler distributed product suite as a developer, customer support engineer, tester, and information developer. For a long time she held the role of Test Team Leader for L3 fix pack releases, during which she acted as a facilitator in critical situations and upgrade scenarios at customer sites. In her last four years at IBM, she became the IBM Cloud Resiliency and Chaos Engineering Test Team Leader. She joined HCL in April 2019 as an expert tester for the IBM Workload Automation product suite and was recognized as Test Leader for porting the product to the most important cloud offerings on the market. She holds a bachelor's degree in Mathematics.
Posts by Serena Girardini
Automation | February 9, 2021
Automate Anything, Run Anywhere, starting from the Workload Automation Test environment!
Automate Anything, Run Anywhere is our motto! What better place to start making it real than our very own Workload Automation test environment? We have been applying it in our Test Lab since the general availability of Workload Automation version 9.5 in 2019, and through all the subsequent fix packs, including the latest one, fix pack 3.
Automation | February 1, 2021
How to move your Workload Automation on-premises deployment to cloud
To improve productivity and to save IT management costs and time, our customers are starting to move to the cloud, shifting heavy workloads from a set of on-premises machines to a resilient cloud infrastructure. Our principal mission, then, is to facilitate, secure, and support moving a Workload Automation deployment from an on-premises infrastructure (WA 9.5 FP3 is the minimum required version) to any cloud infrastructure where Workload Automation (WA) is already supported. Video guide: https://youtu.be/7AQHgCnpqLc
Automation | August 4, 2020
Unleash the power of HCL Workload Automation in an Amazon EKS cluster
Don't get left behind! The new era of digital business transformation has moved on to new operating models such as containers and cloud orchestration. Let's find out how to get the best out of Workload Automation (WA) by deploying the solution on a cloud-native environment such as Amazon Elastic Kubernetes Service (Amazon EKS). This type of deployment makes implementing the WA topology 10x easier, 10x faster, and 10x more scalable compared to the same deployment on a classical on-premises platform.

In an Amazon EKS deployment, to best fit your company's cloud networking needs, you can select the appropriate cloud networking components supported by the WA Helm chart for the server and console components:

- Load balancers
- Ingresses

You can also leverage the Grafana monitoring tool to display WA performance data and metrics related to the server and console application servers (WebSphere Application Server Liberty Base). Grafana needs to be installed manually on Amazon EKS to gain access to the Grafana dashboards. Metrics provide a drill-down into the state, health, and performance of your WA deployment and infrastructure.

In this blog you can discover how to:

- Deploy the WA components (server, agent, console) in an Amazon EKS cluster, using one of the available cloud network configurations.
- Download the Kubernetes job plug-in from the Automation Hub website and configure it in your Amazon EKS cloud environment.
- Monitor the WA solution from the customized WA Grafana dashboard.

Let's start by taking a tour!

Deploy the WA components (server, agent, console) in an Amazon EKS cluster, using one of the available network configurations

In this example, we set up the following topology for the WA environment and configure the ingress network configuration for the server and console components:

- 1 server
- 2 dynamic agents
- 1 console

Let's demonstrate how you can roll out the deployment...
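To make the ingress-based rollout more concrete, here is a minimal command-line sketch of what such an installation could look like. The chart path, namespace, release name, and the replicaCount/exposeServiceType value names are assumptions made for illustration, not the documented interface of the WA Helm chart; the full post and the chart's README remain the reference.

# Sketch only: chart path and all --set keys below are assumptions,
# not the documented WA Helm chart values.

# Point kubectl at the EKS cluster created for the WA deployment.
aws eks update-kubeconfig --name wa-cluster --region us-east-1

# Install server, console, and agents in one release, exposing the
# server and console through an ingress rather than a load balancer.
helm install wa ./workload-automation \
  --namespace workload-automation --create-namespace \
  --set waserver.replicaCount=1 \
  --set waagent.replicaCount=2 \
  --set waconsole.replicaCount=1 \
  --set waserver.exposeServiceType=INGRESS \
  --set waconsole.exposeServiceType=INGRESS

# Verify that the pods for the 1 server, 2 agents, and 1 console are running.
kubectl get pods -n workload-automation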
Automation | August 3, 2020
Make the deployment easier, get the most from Workload Automation in OpenShift
Can you have your desired number of Workload Automation (WA) agent, server, and console instances running whenever and wherever? Yes, you can!

Starting from Workload Automation version 9.5 fix pack 2, you can deploy the server, console, and dynamic agent on OpenShift 4.2 or later platforms. This kind of deployment makes implementing the Workload Automation topology 10x faster and 10x more scalable compared to the same deployment on a classical on-premises platform. Workload Automation gives you an effortless way to create, update, and maintain both the installed WA operator instance and the WA component instances, also leveraging the Operators feature introduced by Red Hat starting from OCP 4.1.

In this blog, we address the following use cases using the new deployment method:

- Download and install the WA operator and WA component images
- Scheduling: scale the number of WA instances up and down
- Monitoring WA resources to improve quality of service using the OCP dashboard console

Download and install the WA operator and WA component images

WA operator and pod instance prerequisites

Before starting the WA deployment, ensure your environment meets the required prerequisites. For more information, see https://www.ibm.com/support/knowledgecenter/SSGSPN_9.5.0/com.ibm.tivoli.itws.doc_9.5/README_OpenShift.html

Download the WA operator and component images

Download the Workload Automation Operator and product component images from the appropriate website. Once deployed, the Operator can be used to install the Workload Automation components and to manage the deployment thereafter. The following packages are available:

Flexera (HCL version):

- 9.5.0-HCL-IWS_OpenShift_Server_UI_Agent_FP0002.zip, containing the images for all HCL components. Download this package to install either all or selected components (agent, server, console).
- 9.5.0-HCL-IWS_OpenShift_Agent_FP0002, containing the HCL agent image.

Fix Central (IBM version):

- 9.5.0-IBM-IWS_OpenShift_Server_UI_Agent_FP0002.zip, containing the images for all IBM components. Download this package to install either all or selected components (agent, server, console).
- 9.5.0-IBM-IWS_OpenShift_Agent_FP0002, containing the IBM agent image.

Each operator package has the following structure (keep...
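As a taste of the "scale up and down" use case mentioned above, here is a hypothetical sketch of scaling the dynamic agents through the operator's custom resource. The resource kind (workloadautomationprod), instance name, spec field, and pod label are all assumptions, not the WA operator's actual API; they only illustrate the operator-driven reconciliation flow.

# Hypothetical sketch: resource kind, instance name, and spec field
# are assumptions about the WA operator's custom resource.

# Scale the dynamic agents from 2 up to 4 by patching the custom
# resource; the operator reconciles the change and adds the pods.
oc patch workloadautomationprod wa-instance \
  --type merge \
  -p '{"spec": {"waagent": {"replicaCount": 4}}}'

# Watch the operator bring the new agent pods up
# (the app=waagent label is assumed for illustration).
oc get pods -l app=waagent -w

Scaling down works the same way: patch replicaCount back to a lower number and let the operator remove the surplus pods.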
Automation | July 27, 2020
Enforce Workload Automation continuous operation by activating Automatic failover feature and an HTTP load balancer
How important is it that your Workload Automation environment is healthy, up and running, with no workload stops or delays? What happens if your master domain manager becomes unavailable or is affected by downtime? What manual recovery steps must you perform when that happens? How can you distribute requests simultaneously across several application servers when your primary server is drowning? How can you easily monitor the health of your Workload Automation environment on an hourly basis? How can you set up an alerting mechanism?

The answer is: Workload Automation 9.5 FP2 with the automatic failover feature enabled, combined with the NGINX load balancer!

Let's start by introducing the components participating in the solution:

= Workload Automation 9.5 FP2 introduces the automatic failover feature =

When the active master domain manager becomes unavailable, this feature enables an automatic switchover to a backup engine and event processor server. It ensures continuous operation: you configure one or more backup engines so that when a backup detects that the active master has become unavailable, it triggers a long-term switchmgr operation to itself. You can define the potential backups in a list, placing the preferred backups at the top. The backup engines monitor the behaviour of the master domain manager to detect anomalous behaviour.

= NGINX load balancer =

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. NGINX can be used as a very efficient HTTP load balancer to distribute traffic across several application servers and to improve the performance, scalability, and reliability of web applications. NGINX acts as a single entry point to a distributed web application running on multiple separate servers.

Let's continue by analyzing our use case solution: we experimented with the solution by defining and using this environment during the formal test phase of the 9.5 FP2 project. ...
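To picture the NGINX side of the solution, here is a minimal nginx.conf sketch of the idea described above: NGINX as the single entry point with an upstream pool of WA application servers. The host names, port, and certificate paths are placeholders, not the topology used in the test environment described in the post.

# Minimal sketch: host names, port, and certificate paths are placeholders.

events {}

http {
    # Pool of WA application servers (WebSphere Liberty) that NGINX
    # balances requests across; round-robin is the default policy.
    upstream wa_servers {
        server wa-mdm.example.com:31116;
        server wa-bkm.example.com:31116;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/wa.crt;
        ssl_certificate_key /etc/nginx/certs/wa.key;

        location / {
            # Single entry point: clients reach NGINX, which forwards
            # each request to one of the backend servers.
            proxy_pass https://wa_servers;
        }
    }
}

With this setup, NGINX distributes requests round-robin across the upstream servers, and a backend that stops responding is temporarily taken out of rotation, which complements the engine-side automatic failover.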