
Tintri Blog

Don’t Know Much About Hyperconverged

May 22, 2015

Hyperconverged infrastructure may seem to have some advantages, such as easy scale-out, component standardization and management efficiency, but these don't quite make up for its shortcomings. For instance, storage capacity tends to run out first in hyperconverged settings: to get more storage you have to add another appliance, even though the compute resources you already own are running at only around 25% utilization. And that's before you get to the potentially huge challenge of tuning disk storage for a particular application.
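
To make that mismatch concrete, here's a rough back-of-the-envelope sketch in Python. The node sizes and demand figures are hypothetical, but the pattern is the one described above: you buy appliances for storage and end up paying for idle compute.

```python
# Back-of-the-envelope sketch of the scaling mismatch described above.
# All node sizes and demand figures are hypothetical.

NODE_STORAGE_TB = 10   # usable storage per appliance (assumed)
NODE_CPU_CORES = 24    # CPU cores per appliance (assumed)

storage_needed_tb = 45   # the data set keeps growing...
cpu_cores_needed = 30    # ...but compute demand barely moves

# Nodes are sized by whichever resource runs out first -- here, storage.
nodes = -(-storage_needed_tb // NODE_STORAGE_TB)   # ceiling division -> 5 nodes

compute_utilization = cpu_cores_needed / (nodes * NODE_CPU_CORES)
print(f"appliances purchased: {nodes}")
print(f"compute utilization:  {compute_utilization:.0%}")   # 25%, paid for but mostly idle
```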

Of course, this raises the question: what would an application-centric solution look like? After all, each application has its own I/O profile, and latency plays a crucial part in it. IOPS numbers mean nothing without insight into latency, read/write mix and I/O size; a storage array that shows elevated latency for a sustained period spells trouble for application performance. A database instance issues many random reads at around 10 milliseconds and is very sensitive to latency. Redo logs, by contrast, want a write acknowledged in under 5 milliseconds, while high-frequency writes demand less than 1 millisecond of latency. If the database can't complete a redo log write quickly, everything grinds to a halt until that write is done.
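
As a hedged illustration (not a product feature), the sketch below uses latency budgets that simply mirror the rough figures above. It shows why an IOPS number means little until it's paired with a latency budget, an I/O size and a read/write mix.

```python
# Illustrative latency budgets per I/O profile, mirroring the rough figures
# above. These numbers are examples, not a specification.

LATENCY_BUDGET_MS = {
    "db_read":        10.0,  # random database reads
    "redo_log_write":  5.0,  # write plus acknowledge
    "hf_write":        1.0,  # high-frequency writes
}

def check_io(kind, observed_ms, io_size_kb, write_pct):
    """Flag an I/O stream whose observed latency exceeds the budget for its profile."""
    budget = LATENCY_BUDGET_MS[kind]
    status = "OK" if observed_ms <= budget else "AT RISK"
    return (f"{kind}: {observed_ms:.1f} ms vs {budget:.1f} ms budget "
            f"({io_size_kb} KB, {write_pct:.0%} writes) -> {status}")

print(check_io("redo_log_write", 7.3, 8, 1.0))  # a stalled redo log write
print(check_io("db_read", 6.0, 8, 0.2))         # reads still inside budget
```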

Moreover, high latency for a virtualized business-critical application has a knock-on effect on all the virtual machines on the storage array, potentially pushing large volumes of data through the "I/O Blender." This is the phenomenon that occurs when multiple VMs send I/O streams to the hypervisor simultaneously. Under heavy workloads, I/Os that were sequential at the VM level become random once they are blended together, which slows read/write operations on the disks and drives latency up. Adding more SSDs or overprovisioning your storage capacity for the application might work around this, but it will cost you both time and money.
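
The toy sketch below, with made-up VM names and block offsets, illustrates the blender effect: each stream is perfectly sequential on its own, but the interleaved stream the array receives looks random.

```python
# Toy model of the I/O Blender: three VMs each issue a perfectly sequential
# stream of 8 KB block reads, but the hypervisor interleaves them, so the
# array sees a pattern that is effectively random. Names and offsets are made up.

vm_streams = {
    "vm-db":  list(range(1000, 1006)),
    "vm-app": list(range(5000, 5006)),
    "vm-web": list(range(9000, 9006)),
}

# Round-robin interleaving as a crude stand-in for the hypervisor's scheduling.
blended = [
    (vm, block)
    for group in zip(*vm_streams.values())
    for vm, block in zip(vm_streams, group)
]

print(blended[:6])
# [('vm-db', 1000), ('vm-app', 5000), ('vm-web', 9000),
#  ('vm-db', 1001), ('vm-app', 5001), ('vm-web', 9001)]
# Each VM was sequential, but consecutive requests at the array now jump
# thousands of blocks apart.
```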

The best way to treat this issue is to examine each individual application and its end-to-end behavior, meaning at the host, network and storage level. From a storage perspective, you want to give this mission-critical application the resources it requires at a latency of less than 1 millisecond. That requires an appliance intelligent enough to maintain a continuous overview of all the VMs on the system and of their I/O behavior.
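
As a rough illustration of what such a per-VM, end-to-end view might look like (the VM names and numbers below are invented), the idea is to break each VM's latency into host, network and storage components and flag whatever blows the 1 millisecond storage budget.

```python
# Sketch of a per-VM, end-to-end latency view: host + network + storage,
# with anything over a 1 ms storage budget flagged. VM names and numbers
# are invented for illustration.

vm_latency_ms = {
    # vm name:      (host, network, storage)
    "sql-prod-01":  (0.2, 0.1, 0.6),
    "exchange-02":  (0.3, 0.2, 2.4),  # suffering storage contention
}

STORAGE_BUDGET_MS = 1.0

for vm, (host, net, storage) in vm_latency_ms.items():
    total = host + net + storage
    flag = "" if storage < STORAGE_BUDGET_MS else "  <-- storage over budget"
    print(f"{vm}: host {host} + network {net} + storage {storage} "
          f"= {total:.1f} ms{flag}")
```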

You can do this with a working set analyzer, which tracks I/O continuously and delivers a 99% flash hit rate at an 8 KB block size, while inactive data automatically lands on HDD. With inline deduplication and compression on the flash tier and compression applied on the HDD tier, you can store 2 to 2.5 times more data than the physical capacity, also called effective capacity. In addition, VM-level policies for snapshots, clones, replication and encryption can be set per application and/or vDisk.
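
The effective-capacity arithmetic is simple enough to sketch; the raw capacity and data-reduction ratio below are assumptions chosen to land in the 2 to 2.5x range mentioned above.

```python
# The effective-capacity arithmetic in miniature. The raw capacity and the
# combined data-reduction ratio are assumptions chosen to sit in the
# 2 to 2.5x range mentioned above.

raw_capacity_tb = 20.0        # physical flash + HDD capacity (assumed)
data_reduction_ratio = 2.25   # inline dedupe + compression, combined (assumed)

effective_capacity_tb = raw_capacity_tb * data_reduction_ratio
print(f"{raw_capacity_tb} TB raw -> {effective_capacity_tb} TB effective")
# 20.0 TB raw -> 45.0 TB effective
```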

With VM-Aware or Application-Aware Storage, the hyperconverged dependency on the compute component no longer exists, which also helps you avoid vendor lock-in. More importantly, each application gets the resources it needs without compromise, even when it needs them at less than 1 millisecond of latency. The isolated workloads all get the right Quality of Service.

You shouldn't have to worry about storage management when your attention belongs on your virtualized applications. And when an application performs well under all circumstances, you'll satisfy your customers, and yourself.
