In this blog, we walk you through enabling session affinity for the Dynamic Workload Console (console) deployed in a Google Kubernetes Engine (GKE) cluster that uses an HTTP(S) load balancer network service, together with an instance of the Google Cloud SQL for SQL Server managed database.

In a GKE cluster environment, a backend service defines how the HTTP(S) cloud load balancing network service distributes incoming traffic. By default, the method for distributing new connections uses a hash calculated from five pieces of information:

  1. The client’s IP address.
  2. The source port.
  3. The load balancer’s internal forwarding rule IP address.
  4. The destination port.
  5. The protocol.

You can modify the traffic distribution method for HTTP(S) traffic by specifying a session affinity option.
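For reference, on GKE a session affinity option for HTTP(S) load balancing can be declared through a BackendConfig resource attached to the console's Kubernetes Service. The following is a minimal sketch of that mechanism only; the resource name is a placeholder, and the WA chart discussed in this blog sets affinity for you, so you do not normally write this by hand:

```yaml
# Minimal sketch: client-IP session affinity via a GKE BackendConfig.
# The metadata name is a placeholder.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: console-backendconfig
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
```

The BackendConfig is then referenced from the Service with the annotation `cloud.google.com/backend-config: '{"default": "console-backendconfig"}'`.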

Let’s start by asking a couple of questions about the Workload Automation (WA) deployment in a GKE cluster with regard to the console network.

When hundreds of users are logged into the web console, how are inbound traffic requests handled?
How is the inbound traffic from the console clients redirected to the multiple console instances installed as a pod in the cluster?

Thanks to the Kubernetes proxy model, WA traffic bound for the service’s IP:Port is proxied to an appropriate backend without the clients needing any knowledge of Kubernetes, services, or pods.

If you want to be sure that all connections from a particular WA console client always reach the same WA console pod, you can set session affinity based on client IP addresses by exposing the LoadBalancer_SessionAffinity service type in the configuration file of your WA deployment.

Continue reading this blog to discover exactly how to do that!

Configure the WA console with session affinity 

For more information about where you can download WA containers to install, or the related helm chart, see the appropriate readme file:

  • HCL customers:

  • IBM customers:
To deploy the Workload Automation console and enable session affinity, you simply expose the LoadBalancer_SessionAffinity service type. This can be done by editing the values.yaml file as follows.
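As a sketch, the relevant setting in values.yaml might look like the following; the key name (here `console.exposeServiceType`) is an assumption based on typical WA chart layouts and can vary between chart versions, so verify it against your chart's README:

```yaml
# Hypothetical values.yaml fragment; the key name is an assumption.
# Verify it against the README of your WA helm chart version.
console:
  exposeServiceType: LoadBalancer_SessionAffinity
```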

NOTE: This type of service is only available for the console.

With session affinity enabled, you can be sure that you are always connected to the same console pod, keeping your session always active. In this way, you can continue to automate your workload without interruption.
Configure the WA console with Google Cloud SQL for SQL Server 

Embrace the power of Google Cloud native services such as Google Cloud SQL. Workload Automation supports installing the server and console on Cloud SQL for SQL Server, letting you take advantage of the flexibility of a Google Cloud managed database. Check this out!
From the Google Cloud Platform Console, search for the “cloud sql” resource and create a new SQL Server instance.
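If you prefer the command line, the same instance can be created with gcloud; this is an illustrative sketch only, where the instance name, region, machine size, and password are placeholders:

```shell
# Illustrative only: create a Cloud SQL for SQL Server instance.
# Instance name, region, sizing, and password are placeholders.
gcloud sql instances create wa-sqlserver \
  --database-version=SQLSERVER_2019_STANDARD \
  --region=europe-west1 \
  --cpu=2 \
  --memory=8GB \
  --root-password=<STRONG_PASSWORD>
```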

Once your database instance is up and running, you can customize the values.yaml file with the information of your new database.

To install the Dynamic Workload Console on Cloud SQL, configure your deployment as follows:
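A minimal sketch of the console database settings in values.yaml, assuming typical WA chart keys: the `console.db.*` names and the `TDWC` database name are assumptions, the hostname is your Cloud SQL instance address, and the password is normally supplied through a Kubernetes secret rather than in clear text:

```yaml
# Hypothetical values.yaml fragment for the console database.
# Key names are assumptions; verify them against your chart's README.
console:
  db:
    type: MSSQL                     # Cloud SQL for SQL Server
    hostname: <CLOUD_SQL_INSTANCE_IP>
    port: 1433
    name: TDWC
    user: sqlserver
    # The password is usually supplied through a Kubernetes secret.
```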

To also install the server component on Cloud SQL, configure your deployment as follows:
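The server side is analogous; again, the `server.db.*` key names and the `TWS` database name are assumptions to be checked against your chart's README:

```yaml
# Hypothetical values.yaml fragment for the server database.
# Key names are assumptions; verify them against your chart's README.
server:
  db:
    type: MSSQL                     # Cloud SQL for SQL Server
    hostname: <CLOUD_SQL_INSTANCE_IP>
    port: 1433
    name: TWS
    user: sqlserver
    # The password is usually supplied through a Kubernetes secret.
```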

Configure the WA server with an internal or public load balancer 

If you need to manage traffic across multiple servers in your GCP cluster, you can opt for an internal or public load balancer.

  • To deploy the Workload Automation server with a public load balancer, specify LoadBalancer as the service type.

  • To deploy the Workload Automation server with an internal load balancer, specify LoadBalancer as the service type and expose the internal load balancer annotation for GKE.
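The two options above can be sketched in values.yaml as follows. The `exposeServiceType` and `exposeServiceAnnotation` key names are assumptions based on typical WA chart layouts, and the annotation shown is the current GKE one (older GKE versions use `cloud.google.com/load-balancer-type: "Internal"` instead):

```yaml
# Hypothetical values.yaml fragment; key names are assumptions.
server:
  exposeServiceType: LoadBalancer
  # For an internal load balancer only, add the GKE annotation:
  exposeServiceAnnotation:
    networking.gke.io/load-balancer-type: "Internal"
```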

Deploy your Workload Automation configuration for the console and server 

After you have finished customizing the values in your values.yaml file, including the values explained earlier in this blog, you are ready to deploy your Workload Automation environment, including the console, on your GKE cluster.
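A typical deployment then boils down to a single Helm command; the release name, chart reference, and namespace below are placeholders:

```shell
# Illustrative only: deploy WA with Helm after editing values.yaml.
# Release name, chart reference, and namespace are placeholders.
helm install workload-automation <repo>/<wa-chart> \
  --namespace workload-automation \
  --values values.yaml
```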
For more information about how to deploy, read the following README files:



We hope you enjoyed this article and that you will take the time to try out a configuration of this kind. You won’t regret it. Send us your feedback and comments; they help us provide you with useful content!

Do not hesitate to reach out to us for any questions or doubts!


Federico Yusteenappar, Workload Automation Junior Software Developer, HCL Technologies

Federico joined HCL in September 2019 as Junior Software Developer working as a Cloud Developer for the IBM Workload Automation product suite. His main focus has been the extension of the Workload Automation product from a Kubernetes native environment to the OpenShift Container Platform. He has a Master’s degree in Computer Engineering.



Pasquale Peluso, Workload Automation Software Engineer, ​HCL Technologies

Pasquale joined HCL in September 2019 as a member of the Verification Test team. He works as a verification tester for the Workload Automation product suite on distributed and cloud-native environments. He has a Master’s degree in Automation Engineering.


Davide Malpassini, Workload Automation Technical Lead, HCL Technologies

Davide joined HCL in September 2019 as a Technical Lead working on the IBM Workload Automation product suite. He has 14 years of experience in software development, and he was responsible for the extension of the Workload Automation product from a Kubernetes native environment to the OpenShift Container Platform, as well as for the REST API for the Workload Automation engine. He has a Master’s degree in Computer Engineering.


Filippo Sorino, Software Developer, HCL Technologies

Filippo joined HCL in September 2019 as a Junior Software Developer and works as a Verification engineer for the IBM Workload Automation product suite.  He has a Bachelor’s degree in Computer Engineering.


Serena Girardini, Verification Test manager, HCL Technologies

Serena is the Verification Test Manager for the Workload Automation product in distributed environments. She joined IBM in 2000 as a Tivoli Workload Scheduler developer and was involved in the product relocation from the San Jose Lab to the Rome Lab during a short-term assignment in San Jose (CA). Over 14 years, Serena gained experience in the Tivoli Workload Scheduler distributed product suite as a developer, customer support engineer, tester, and information developer. For many years, she held the role of L3 fix pack release Test Team Leader and, during this period, acted as a facilitator during critical situations and upgrade scenarios at customer sites. In her last 4 years at IBM, she became the IBM Cloud Resiliency and Chaos Engineering Test Team Leader. She joined HCL in April 2019 as an expert tester and was recognized as Test Leader for the product porting to the most important cloud offerings in the market. She has a Bachelor’s degree in Mathematics.
