Like many of our clients, most of you enjoy the many features of the Unica product suite. We at HCL are committed to expanding product capabilities and responding to client requests, so you will be happy to hear that HCL is investing in containerization options for the Unica platform. To understand containerization, we must first understand virtualization. Many of you already use virtualization in your IT infrastructure to deploy Unica products and appreciate the increased flexibility, scalability, maintainability, and cost savings it brings. Containerization is functionally similar to virtualization but much lighter, as it leverages the host machine's operating system. This makes containers smaller and faster to start up.

Docker is currently one of the most popular containerization technologies and a major industry buzzword. We at HCL understand our customers' need for greater usability, flexibility, and a high level of stability, and we strive to provide all of this and more with the latest technology. As Docker grows in popularity, along with tools like Kubernetes and Helm, it has become a boon for IT professionals with the requisite expertise. At HCL, we feel it is the perfect time to add this option to the Unica offerings. This includes not only pre-built Docker images but also the orchestration capabilities of Kubernetes and Helm. For those wondering whether it is the right time to make this transition, we list some advantages of Dockerization that will help you decide whether it is the right choice for you.

  1. Easier to install and configure

With traditional on-premises software offerings, customers have to download the installation binaries, run the installer, and perform the configuration. Because this process has many steps, customers typically require assistance from Professional Services. With the Docker approach, you receive a set of files that can be launched with a single Helm command, which automatically downloads the pre-installed, pre-configured Docker images, performs the installation, and starts the applications. This significantly decreases the dependency on your IT department.
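As a sketch of what that single command might look like (the release name, chart path, namespace, and settings below are illustrative placeholders, not the actual Unica chart coordinates):

```shell
# Hypothetical example -- chart path, namespace, and values are
# placeholders, not the real Unica Helm chart coordinates.
helm install unica ./unica-chart \
  --namespace unica --create-namespace \
  --set database.host=db.example.com
```

Helm then pulls the referenced images and creates all the Kubernetes objects described in the chart in one step.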

*This diagram illustrates the cluster contents only. Shapes with dashed lines represent failover containers, which run only when the primary containers stop.
2. Built-in failover – In the configuration file, you can specify how many instances need to run, with Kubernetes keeping watch over the whole environment. If an instance shuts down, Kubernetes automatically starts a new one. This significantly decreases downtime and the pressure on your DevOps team.
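A minimal sketch of how such a desired instance count might be declared (all names and the image reference are illustrative, not actual Unica chart values):

```yaml
# Hypothetical Deployment fragment: asks Kubernetes to keep three
# instances running and to replace any container that exits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: interact-runtime
spec:
  replicas: 3
  selector:
    matchLabels:
      app: interact-runtime
  template:
    metadata:
      labels:
        app: interact-runtime
    spec:
      containers:
        - name: interact
          image: example/unica-interact:latest  # placeholder image name
```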
3. Auto-scaling – Kubernetes supports auto-scaling. All you need to do is set thresholds for when to scale out and scale in. Kubernetes will monitor CPU utilization and dynamically add or remove running Docker containers, depending on the hardware available. If you would rather control this yourself, you can define the number of running instances, and Kubernetes will maintain that count automatically.
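The threshold-based scaling described above is typically expressed as a HorizontalPodAutoscaler; a hedged sketch (the names and numbers are illustrative assumptions):

```yaml
# Hypothetical autoscaler: grows from 2 up to 10 instances when
# average CPU utilization crosses 70%, and shrinks back afterwards.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: interact-runtime-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: interact-runtime   # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```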
4. Automatic load balancing – When you start a cluster with multiple running instances of the Interact runtime or Campaign servers, a built-in load balancer routes the requests, so your IT team does not need to install and configure a dedicated load balancer. When instances are added to or removed from the cluster, the load balancer adjusts automatically.
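In Kubernetes terms, this built-in routing is usually a Service that spreads requests across all matching instances; a sketch with illustrative names and ports:

```yaml
# Hypothetical Service: distributes traffic across every pod whose
# labels match the selector -- no separate load balancer to install.
apiVersion: v1
kind: Service
metadata:
  name: interact-runtime
spec:
  selector:
    app: interact-runtime   # placeholder label
  ports:
    - port: 80        # port clients call
      targetPort: 8080  # assumed application port inside the container
```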
5. Smoother upgrades – Whenever HCL releases a new version of the Unica platform, a new set of Docker files will be released. All you need to do is run a single script, which upgrades the database schema and replaces the old application instances with the newer version. A rolling-upgrade methodology is used to ensure minimal downtime. The state of each new instance is monitored automatically, and if it fails for any reason, the cluster is rolled back to the previous version.
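The rolling upgrade and rollback might look like the following (release, chart, and deployment names are placeholders, not the actual Unica artifacts):

```shell
# Illustrative commands -- names are placeholders.
helm upgrade unica ./unica-chart-new              # rolling upgrade to the new version
kubectl rollout status deployment/interact-runtime  # watch replacement progress
helm rollback unica 1                             # revert to a previous release if needed
```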
6. Standardization – Using Docker containers ensures consistency across multiple environments. As long as you deploy the same Docker image, it will behave the same way no matter where it is hosted. Even if you customize your Unica installation, you can update the Docker files in one place and then use the same image in all your deployments; there is no need to make the same changes in multiple installations. You can also use a version control tool to archive those changes in case you need to reference previous work.
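One common pattern for such a customization is a small Dockerfile layered on top of the vendor image; the base image name and paths below are illustrative assumptions, not actual Unica coordinates:

```dockerfile
# Hypothetical customization layer -- image name and paths are
# placeholders. Build once, deploy the same image everywhere.
FROM example/unica-campaign:12.0
COPY custom-config/ /opt/unica/Campaign/conf/
```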
7. Continuous deployment and testing – With this improved upgrade approach, you can break significant changes into multiple small steps, test each step, and be confident that those changes will be deployed reliably into production. Conversely, if something goes wrong in production, the exact Docker image can be copied to a test environment to reproduce the issue and analyze the cause, then tested and fixed without the hassle of setting up an identical environment.
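Copying the exact production image into a test environment can be as simple as re-tagging it (registry hosts and tags here are placeholders):

```shell
# Illustrative -- registry hosts and tags are placeholders.
docker pull registry.example.com/unica-campaign:12.0.1      # the image running in production
docker tag  registry.example.com/unica-campaign:12.0.1 \
            test-registry.example.com/unica-campaign:12.0.1
docker push test-registry.example.com/unica-campaign:12.0.1 # make it available to the test cluster
```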
8. Hardware elasticity – As described earlier, Docker containers separate applications from the host operating system and hardware, so you can reuse the same Docker image even when you change your hardware configuration. This separation also lets you dynamically add or remove hardware to fit your technical and budget needs by simply changing the number of running containers. Kubernetes takes this separation even further by hiding server details from the applications. For example, when a busy shopping season arrives for only a short period, you can add a few servers and notify Kubernetes that they are available, and Kubernetes will deploy Dockerized application containers to those servers. Once you are past the increased load, you simply notify Kubernetes that those servers are unavailable, and it will adjust the deployment and scale down.
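Assuming the seasonal server has already been joined to the cluster, the add/remove cycle might look like this (node, deployment names, and replica counts are illustrative):

```shell
# Illustrative -- names and counts are placeholders.
kubectl uncordon seasonal-node-1                       # mark the extra server schedulable
kubectl scale deployment/interact-runtime --replicas=8 # spread load onto it
# after the peak season:
kubectl drain seasonal-node-1 --ignore-daemonsets      # move workloads off the server
kubectl scale deployment/interact-runtime --replicas=3 # return to the baseline
```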

9. Cost savings – As you can see, the Docker offering for Unica will save you time and reduce installation and upgrade costs. Thanks to elasticity, you can save on hardware costs as well.
10. Cloud readiness – Moving services to the cloud is becoming more common because of the various benefits it offers. If your organization decides to move applications to the cloud, or simply wishes to carry out a trial run first, Dockerization makes this easy. This is true even with customized Docker images, because the Dockerization of Unica products is designed to be technology agnostic: it works with multiple cloud providers such as AWS, GCP, and Azure, or even a private cloud hosted in your own IT infrastructure.
Author Bio – Su Su is a principal architect for the Unica suite. He has been working on Unica products for 9 years, with a focus on real-time distributed systems. Write to us to learn more about how you can utilize this containerization offering and benefit from it.
Further Reading
Marketing & Commerce | September 17, 2019
Unica Director – ActiveMQ Configuration for Flowchart Monitoring
Marketing & Commerce | August 5, 2019
How to get ready to install Unica Director?