Powering Cloud-Native Apps
Data Management | April 7, 2021
The discipline of data security manifests through many capabilities that protect information at rest, in motion, and in use. According to a report by the Ponemon Institute and IBM Security, the average cost of a data breach in 2019 was $3.92 million, with the average breach involving 25,575 records. High-profile companies such as Capital One, Evite, and Zynga experienced data breaches that exposed more than 100 million customer accounts each. Data breaches must be disclosed to customers, so they can be costly events that result in multimillion-dollar lawsuits and settlements.

Many organizations are realizing that the value of data and the cost of protecting it are increasing simultaneously. This makes protecting data with patches a cost-prohibitive approach: each additional security layer or patch adds cost without addressing the underlying design. Instead, IT teams must design and implement the right data management strategy from the beginning and select the right solutions. To make matters worse, the median data volume that companies currently have under management, in both structured and unstructured formats, is now greater than 630TB and is expected to exceed 820TB within two years.

One best-practice area for any data security strategy is database protection. This includes monitoring database activity to detect unusual user behavior, as well as conducting regular access reviews to identify old and unnecessary permissions, both of which OneDB supports through its OneDB Explore front-end UX tool. It also includes encrypting data, which OneDB does at various levels: individual columns; all or selected database storage units (dbspaces and sbspaces); backup archives; and client-server and server-server communication. And of course, enforcing the least privilege necessary to carry out a function is also prudent.
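The access reviews mentioned above can be partly automated. Below is a minimal sketch of the idea: flag grants that have not been exercised recently. The record layout (`user`, `privilege`, `last_used`) is purely illustrative, not an actual OneDB catalog schema.

```python
from datetime import datetime, timedelta

def stale_grants(grants, now, max_idle_days=90):
    """Return grants whose privilege has not been used within
    max_idle_days (or has never been used at all)."""
    cutoff = now - timedelta(days=max_idle_days)
    return [g for g in grants
            if g["last_used"] is None or g["last_used"] < cutoff]

# Hypothetical grant records, for illustration only.
grants = [
    {"user": "app_rw",    "privilege": "UPDATE", "last_used": datetime(2021, 4, 1)},
    {"user": "old_batch", "privilege": "DELETE", "last_used": datetime(2020, 6, 1)},
    {"user": "intern",    "privilege": "SELECT", "last_used": None},
]
flagged = stale_grants(grants, now=datetime(2021, 4, 7))
print([g["user"] for g in flagged])  # the unused grants to review and revoke
```

In practice the input would come from the database's own activity monitoring; the point is simply that "old and unnecessary permissions" are those with no recent usage evidence.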
OneDB is a safe, cost-effective and efficient choice for data management. To learn more about how the HCL OneDB data platform will support your data security strategy, visit us at https://hcltechsw.com/products/onedb.
Data Management | March 23, 2021
Becoming Data Driven
We are living in the data age. Enterprises today are generating data at an unprecedented pace and require the ability to store and utilize data like never before. But where is the data going? Everywhere. For many enterprises, data sprawl is a real challenge. Even with modern technology advancements, the complexity created by data sprawl is compounded by an enterprise’s continuing need to manage legacy systems, deploy modern systems, and respond to changing business conditions in a timely fashion. Enterprises are proactively undertaking strategic efforts to use data to make more informed business decisions, operational improvements, organizational changes, and enhancements to the customer experience. Consider all the different data types that enterprises are analyzing (image below). The median data volume that companies currently have under management, in both structured and unstructured formats, is now greater than 630TB, with that number expected to exceed 820TB within two years.

Moving Past the Barriers

The systems, data types, and analytical processes enterprises need to execute against their data are all likely to evolve. While it would be difficult to predict what will be popular in the future, we do know that machine learning will see significant adoption over the next few years. Machine learning thirsts for data; multiple data types, from numbers and text to images and more, are used to train and build machine learning models. For enterprises, it’s imperative to take the long view on data and data platform systems. It is rarely feasible to adopt and implement every new technology or system that comes along, and it has been shown repeatedly that polyglot persistence may be good for one-off workloads but soon breaks down at scale. HCL’s cloud-native, multi-model database, OneDB, is designed to overcome those limitations and scale to meet future data demands, while significantly lowering TCO.
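To put the growth figures above in perspective, the quoted numbers (630TB today, 820TB within two years) imply roughly 14% compound annual growth. A quick check of that arithmetic:

```python
# Figures quoted in the text: median data under management today and
# the two-year projection.
current_tb = 630
projected_tb = 820
years = 2

# Implied compound annual growth rate (CAGR).
cagr = (projected_tb / current_tb) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 14% per year
```

At that rate, data volume doubles in about five years, which is why capacity planning has to look well beyond the next hardware refresh.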
Why Choose HCL OneDB

OneDB provides a rich multi-model data platform for your...
Powering Cloud-Native Apps with OneDB
What is your company's strategy for managing the growing demands of continuous data and supporting cloud-native app development at today's pace? Many leaders seek to modernize their data platform strategy to meet these challenges. OneDB is feature-rich and equally able to serve as the foundation for cloud solutions, embedded applications, and IoT or edge solutions. Whether you're ready to build brand-new cloud-native apps, or to rehost or re-platform applications to take advantage of the destination platform, HCL OneDB will set you at ease with its multi-model, cloud-native capabilities, one step at a time.

HCL OneDB is well known for its reliability, performance, and simplicity. Whether deployed on-premises, in the public cloud, or in a private cloud, clients gain further advantages, including:

Always-on Transactions - Keep data available at all times, with zero downtime for maintenance and unplanned outages.

Increased Productivity - Stable multi-model data management allows you to focus and quickly deliver the right type of data for the business solutions you need.

Detecting Patterns - HCL OneDB is optimized to find anomalies and deviations for predictive analytics in spatio-temporal data.

Ease of Use - OneDB Explore, our modern graphical administration and monitoring tool for HCL OneDB, gives you the ability to monitor what is critical and take action on what is necessary to keep your business running smoothly.

To learn about OneDB's key capabilities, visit our website or download our datasheet here.
Data Management | October 8, 2020
HCL OneDB – Tackling Performance Issues
Nowadays, performance issues in the database have become something of a cliché. As new data piles up every minute, it is quite normal for the server to become overloaded, which ultimately affects database performance. But what if a single command could point you to the root cause and make tackling such problems easy? This blog explains the command “onstat -p”. This single command gives all the details about engine statistics; based on its output, you can decide where exactly database performance is lagging. The command displays a performance profile that includes the number of reads and writes, the number of times that a resource was requested but was not available, and other miscellaneous information. Below is a sample screenshot of the output.

The onstat -p output varies as shared memory accesses the disk. For example, when we run a simple SQL query such as “select * from abc” and the pages of table abc are not available in the buffers, shared memory will access the disk to fetch the table’s pages, and the output changes based on the number of disk accesses. The first portion of the output describes reads and writes. Reads and writes are tabulated in three categories: from disk, from buffers, and the number of pages (read or written).

Let’s walk through the above output line by line:

1. The status line provides the following information:
- The name and version of the OneDB server product
- The current mode of your server; (Prim) stands for primary, and each server mode (quiescent, single-user, fast recovery, etc.) serves a separate purpose
- The length of time since shared memory was last initialized
- The size of the shared memory; detailed information about shared memory distribution can be acquired using “onstat -g seg”

2. All transactions pass through the buffer pools; every piece of transaction data goes through them. Let's say you...
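The read/write counters in the first portion of the output are typically turned into cache-hit percentages: the share of buffer reads (or writes) that were satisfied from memory rather than disk. A small sketch of that calculation, using the counter names as they commonly appear in onstat -p output (the sample values here are invented, not real server output):

```python
def read_cache_pct(dskreads, bufreads):
    """Percentage of buffer reads satisfied without a disk read,
    i.e. the read %cached figure."""
    if bufreads == 0:
        return 0.0
    return 100.0 * (bufreads - dskreads) / bufreads

def write_cache_pct(dskwrits, bufwrits):
    """Percentage of buffer writes absorbed before reaching disk,
    i.e. the write %cached figure."""
    if bufwrits == 0:
        return 0.0
    return 100.0 * (bufwrits - dskwrits) / bufwrits

# Illustrative counter values only.
print(read_cache_pct(dskreads=5_000, bufreads=1_000_000))    # 99.5
print(write_cache_pct(dskwrits=20_000, bufwrits=400_000))    # 95.0
```

A read cache percentage that drops sharply after a query like the "select * from abc" example above is exactly the signal that table pages are being fetched from disk instead of the buffer pool.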
How Open Global Transactions Can Leave Your Secondary Stuck in Fast Recovery, and How to Terminate Them
What is a Global Transaction? A global transaction is a transaction that involves more than one database server. HCL OneDB database servers support two types of global transactions: TP/XA with a transaction manager, and two-phase commit. HCL OneDB uses a two-phase commit protocol to ensure that distributed queries are uniformly committed or rolled back across multiple database servers.

Global transactions need to be terminated when your secondary server is stuck in fast recovery mode and is not coming online, so we need to locate and terminate them. Whether a global transaction can be terminated gracefully or not depends on its flags.

Scenario 1: An updatable secondary restarted after a crash gets stuck in fast recovery mode until all open transactions are processed; here the global transactions can be terminated gracefully. In this scenario, your updatable secondary crashed for one of several possible reasons and, upon starting, gets stuck in fast recovery mode. The message below appears in the online.log of the SDS node:

12:13:04 Started processing open transactions on secondary during startup

The secondary will not be operational until all the global transactions are cleared; the message above shows processing is still incomplete. The secondary will allow new sessions only once you see the completion message in the log:

20:10:05 Finished processing open transactions on secondary during startup.

Example: In the example below, the SDS was stuck in fast recovery mode for almost 8 hours. Look at the output of onstat -G from both the primary and the secondary. The transactions will have different addresses in memory, but they can be identified by the "data" column. The flag should have ‘H’ at the 3rd position, which means the transaction was heuristically rolling back or rolled back. We can zap them using onmode -H 0x61fbe988 and onmode -H 0x61fbecf0 on the SDS node. Immediately you will see your SDS will be operational...
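Picking out the transactions to zap can be scripted. The sketch below applies the rule described above ('H' in the 3rd position of the flag string) to simplified onstat -G rows; the two-column layout and sample flag strings are assumptions for illustration, not the real onstat -G format.

```python
def zappable(onstat_g_lines):
    """From simplified `onstat -G` rows ("address flags"), select the
    transactions whose flag string has 'H' in the 3rd position
    (heuristically rolling back / rolled back) and build the
    corresponding onmode -H commands."""
    cmds = []
    for line in onstat_g_lines:
        fields = line.split()
        address, flags = fields[0], fields[1]
        if len(flags) >= 3 and flags[2] == "H":
            cmds.append(f"onmode -H {address}")
    return cmds

# Simplified sample rows using the addresses from the example above.
sample = [
    "0x61fbe988 --H--G--",
    "0x61fbecf0 --H--G--",
    "0x61fbf058 -----G--",
]
print(zappable(sample))
```

Running the emitted commands on the SDS node mirrors the manual steps in the example; transactions without the 'H' flag are left alone.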
Save Your Money Using Data Compression
What is Compression?

By minimizing the disk space used by your data and indexes, compression makes it easy to save money. It:
- Helps improve I/O
- Stores data rows in compressed format on disk
- Saves up to 90% of row storage space
- Can estimate the possible compression ratio before compressing
- Fits more data onto a page
- Fits more data into the buffer pool
- Reduces logical log usage

How IDS Storage Optimization Works

Creating a compression dictionary: By considering the entire row and all its columns, IDS looks for repeating patterns and stores those patterns as symbols in a compression dictionary.

Compressing the data in a table: After creating the dictionary, IDS starts a background process that goes through the table or fragment and compresses its rows. The process compresses each row and leaves it in the page where it was compressed. Any new rows that are inserted or updated are also compressed. This compress operation runs while other transactions and queries are occurring on the table, so IDS performs the operation in small transactions and holds locks on the rows being actively compressed for only a short duration.

Reclaiming free space: After all the rows have been repacked, the shrink operation removes the unused table or fragment space and returns free space to the dbspace that contains the table or fragment.

What we are using behind the scenes:
- A Lempel-Ziv (LZ) based algorithm with a static dictionary, built by random sampling
- Frequently repeating patterns replaced with 12-bit symbol numbers
- Any byte that does not match a pattern is also replaced with a 12-bit reserved symbol number
- Patterns can be up to 15 bytes long
- Max possible compression = 90%...
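The 12-bit-symbol scheme described above can be modeled in a few lines. This toy sketch (my own simplification, not the engine's actual implementation) charges one 12-bit symbol per dictionary match and one per unmatched literal byte, which shows how repeating patterns shrink a row:

```python
def estimated_size_bits(row, dictionary):
    """Toy cost model of the static-dictionary scheme: each dictionary
    pattern match and each unmatched byte costs one 12-bit symbol.
    A real engine builds the dictionary by sampling rows; here it
    is supplied directly."""
    # Try longest patterns first, mimicking greedy matching.
    patterns = sorted(dictionary, key=len, reverse=True)
    i, symbols = 0, 0
    while i < len(row):
        for p in patterns:
            if row.startswith(p, i):
                i += len(p)
                break
        else:
            i += 1  # literal byte -> one reserved 12-bit symbol
        symbols += 1
    return symbols * 12

row = b"HELLOHELLOXY"                      # 12 bytes = 96 bits raw
bits = estimated_size_bits(row, [b"HELLO"])
saving = 1 - bits / (len(row) * 8)
print(bits, saving)                        # 48 bits, 50% saved
```

Here two matches of the 5-byte pattern plus two literal bytes cost four 12-bit symbols (48 bits) against 96 bits raw. With highly repetitive rows the model approaches, but by construction of the symbol width never exceeds, the 90% ceiling the engine advertises.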