@hcltechsw
Number of Posts: 4
Automation | August 19, 2020
Workload Automation – Customer-centric approach
A customer-centric company is more than a company that offers good customer service. Customer-centricity is our HCL Software business philosophy: putting our customers first and at the core of our business in order to provide a positive experience and build long-term relationships. In today’s uncertain world, not even the best contract can capture what will change tomorrow. A contract can only convert to business advantage through a value-centric relationship. In the Workload Automation family, we strongly believe in customer collaboration, and we have several programs that help us nurture relationships with our customers and involve them in the product’s design and evolution. The Client Advocacy Program aims to accelerate customer success and to create strategic relationships with HCL’s technical, management and executive leaders. Its mission is to build a direct relationship with our customers: we really want to be able to hear their voice. User experience (UX) design at HCL is based on the Design Thinking approach, which relies on users to stay in touch with real-world needs. We work with users to design and build the solution to their needs through their continuous participation in the design process. We really want to bring the user’s voice into product design and development. What does this actually mean? We take care of the relationship with each customer, regardless of the program. The programs are often just the first engagement: everything can start from a specific request or by pure chance. From the very first meeting with a customer we focus on addressing their needs and building trust, whether it happens in an Ask the Expert or a Design Thinking session. We have tons of success stories that started from a simple question or even a complaint. The entire product team takes care of each customer by looking for the subject matter expert to answer each question. The Customer Advocates are often the first point of contact in the entire organization. They are the customer’s best buddy; they nurture the relationship with constant...
Automation | August 13, 2020
Automate Project Create, Delete & Update with Google Cloud Deployment Manager using Workload Automation
Do you need to create, delete, or update a lot of Google Cloud Platform (GCP) projects? Maybe the sheer volume, or the need to standardize project operations, is making you look for a way to automate project management. We now have a tool to simplify this process for you: Workload Automation announces the GCPDeploymentManager plugin. The GCPDeploymentManager plugin automates the creation and management of Google Cloud resources. You can upload flexible template and configuration files to create and manage your GCP resources, including Compute Engine (i.e., virtual machines), Container Engine, Cloud SQL, BigQuery and Cloud Storage. You can use the GCPDeploymentManager plugin to create and manage projects, whether you have ten or ten thousand of them; automating the creation and configuration of your projects with GCP Deployment Manager allows you to manage them consistently. Now, you can use the GCPDeploymentManager plugin from Workload Automation to create and manage projects. It allows you to specify all the resources needed for your application in a declarative format using YAML. You can parameterize the configuration and reuse common deployment paradigms, such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments. You can focus on the set of resources that make up the application or service instead of deploying each resource separately. Deployment Manager provides templates that let you use building blocks to create abstractions or sets of resources that are typically deployed together (e.g., an instance template, instance group, and autoscaler). These templates can be parameterized so they can be reused over and over by changing input values to define which image to deploy, the zone in which to deploy, or how many virtual machines to deploy. Prerequisites for the plugin to work: the user should have a service account, and the service account should have access to...
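To give an idea of what such a declarative, parameterized template can look like, below is a minimal sketch of a Google Cloud Deployment Manager template written in Python (Deployment Manager accepts Python or Jinja templates alongside the YAML configuration). The property names, machine type, zone and image shown here are illustrative assumptions, not values prescribed by the plugin.

```python
# vm_template.py - minimal, illustrative Deployment Manager template.
# Deployment Manager calls GenerateConfig(context) and creates the resources it returns.
# Property names and default values below are assumptions for the sketch.

def GenerateConfig(context):
    """Return one Compute Engine VM per requested instance, parameterized by zone and image."""
    zone = context.properties['zone']                  # e.g. 'europe-west1-b' (assumption)
    image = context.properties['sourceImage']          # e.g. a debian-cloud image URL (assumption)
    count = context.properties.get('instanceCount', 1) # how many VMs to deploy
    project = context.env['project']

    resources = []
    for i in range(count):
        resources.append({
            'name': '{}-vm-{}'.format(context.env['deployment'], i),
            'type': 'compute.v1.instance',
            'properties': {
                'zone': zone,
                'machineType': (
                    'https://www.googleapis.com/compute/v1/projects/{}/zones/{}/'
                    'machineTypes/n1-standard-1'.format(project, zone)
                ),
                'disks': [{
                    'deviceName': 'boot',
                    'type': 'PERSISTENT',
                    'boot': True,
                    'autoDelete': True,
                    'initializeParams': {'sourceImage': image},
                }],
                'networkInterfaces': [{
                    'network': 'global/networks/default',
                    'accessConfigs': [{'name': 'External NAT', 'type': 'ONE_TO_ONE_NAT'}],
                }],
            },
        })
    return {'resources': resources}
```

A YAML configuration would then import this template and set its properties (for example zone, sourceImage and instanceCount), and the Workload Automation job would hand that configuration to Deployment Manager for a repeatable deployment.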
Automation | July 27, 2020
Enforce Workload Automation continuous operation by activating Automatic failover feature and an HTTP load balancer
How important is it that your Workload Automation environment is healthy, up and running, with no workload stops or delays? What happens if your Master Domain Manager becomes unavailable or is affected by downtime? What manual recovery steps must you take when that happens? How can you distribute requests simultaneously to several application servers in your configuration if your primary server is overloaded? How can you easily monitor the health of the Workload Automation environment every hour? How can you get an alerting mechanism? The answer is: Workload Automation 9.5 FP2 with the Automatic failover feature enabled, combined with an NGINX load balancer! Let’s start by introducing the components participating in the solution. = Workload Automation 9.5 FP2 introduces the Automatic failover feature = When the active master domain manager becomes unavailable, the feature triggers an automatic switchover to a backup engine and event processor server. It ensures continuous operation: you configure one or more backup engines so that when a backup detects that the active master has become unavailable, it triggers a long-term switchmgr operation to itself. You can define the potential backups in a list, adding the preferred backups at the top. The backup engines monitor the behaviour of the master domain manager to detect anomalous behaviour. = NGINX load balancer = Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. NGINX can be used as a very efficient HTTP load balancer to distribute traffic to several application servers and to improve the performance, scalability and reliability of web applications. NGINX acts as a single entry point to a distributed web application running on multiple separate servers. Let’s continue by analyzing our use-case solution: we experimented with the solution by defining and using such an environment during the formal test phase of the 9.5 FP2 project. ...
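As a sketch of the load-balancing idea described above, here is a minimal NGINX configuration that uses an upstream group to spread HTTPS traffic across several backend application servers, with one marked as backup. The host names, port and certificate paths are placeholder assumptions, not the actual topology used in the 9.5 FP2 test environment.

```nginx
# Minimal, illustrative nginx.conf (host names, port 9443 and cert paths are placeholders).
events {}

http {
    upstream wa_backends {
        # Requests are distributed round-robin across these servers;
        # the 'backup' server receives traffic only when the others are unavailable.
        server mdm.example.com:9443;
        server bkm1.example.com:9443;
        server bkm2.example.com:9443 backup;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/wa.crt;   # placeholder certificate
        ssl_certificate_key /etc/nginx/certs/wa.key;

        location / {
            # NGINX is the single entry point; each request is forwarded to one backend.
            proxy_pass https://wa_backends;
        }
    }
}
```

By default NGINX balances requests round-robin across the servers in the upstream group, which is what makes it a convenient single entry point in front of an active master and its backups.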
Automation | July 27, 2020
Custom dashboard: the fifth element that gives you control over all of your environments
No matter whether your environment is based on a rock-solid z/OS controller or on lightweight, easily scalable Docker instances, or whether your distributed, on-premises master and backup master are rocking your workload like fire and water. Earth, wind, water and fire… if you want control over each element, you need the fifth spirit: your custom dashboard! It’s easy to create and customize your dashboard to get control, at a glance, over every single aspect that matters to you and your organization. Each dashboard is composed of several data sources and widgets that can be customized and combined in the new era of dashboards (see the 15-Jun-20 blog post “Welcome to the new Era of Dashboards”). But you can also optimize your dashboard to monitor different kinds of environments all together. Let’s see how it works. Cross-engine widgets: if you need an overview of the entire workload across all of your environments, you can use, for example, the Jobs count by status data source in a pie chart to get a quick overview of how many jobs are waiting, running, ended in error, or ended successfully. To make this data source and widget work across multiple environments, you first need to add an engine list. The D engine list and Z engine list are optimized for homogeneous environments, while for a hybrid (distributed and z/OS) environment you have to select the Engine list. At this point you can also add the desired widget and customize all its fields as shown below. Widgets based on data sources with a pre-defined engine: however, the best way to monitor a hybrid environment is to use specific data sources for each engine. For example, if you need to monitor critical jobs, duplicate the Critical jobs by status data source and name it after the...