- KEY TAKEAWAYS
- As IT teams increase consolidation, Storage QoS becomes increasingly important.
- QoS implementations must be automatic while still providing the control to set both a floor and a ceiling on workload IO, and they must operate at VM- or container-level granularity.
- Tintri combines Auto-QoS for ease of use and manual knobs for control with the full visibility of predictive analytics to simplify performance management for busy IT teams.
A Better Approach to Performance
Tintri Auto-QoS delivers predictable performance for every VM, while giving you the control to explicitly guarantee performance for critical workloads and limit the resource consumption of less important workloads. As a result, different workloads—including traditional enterprise apps and next-generation apps and services—can be consolidated on the same storage system without fear of noisy-neighbor effects or a need to over-provision.
Our next post will look at VM Scale-out and explain how it optimizes VM placement across multiple storage systems automatically, once again without sacrificing fine-grained control.
Tintri VMstore Auto-QoS
Tintri VMstore Auto-QoS is designed to deliver good performance for every VM without requiring you to explicitly configure each one. Auto-QoS works at the VM level and automatically keeps noisy neighbors from interfering with other workloads, so you don’t have to worry about load balancing or the placement of different types, sizes, or numbers of VMs inside LUNs. Tintri Auto-QoS is designed to do the right thing, greatly simplifying (or eliminating) performance management while giving you fine-grained control when you need it to achieve specific objectives.
It accomplishes this with:
- Per-VM performance isolation. The Tintri QoS scheduler maintains an IO queue for every VM, using each VM’s IO request size and per-request overhead to determine the cost of every IO in the system. IO from each queue is scheduled proportionally into the pipeline for execution, ensuring resources are allocated fairly (a simplified scheduling sketch follows this list). It’s important to note that storage lacking visibility at the VM level cannot offer this capability.
- Per-VM performance protection. Minimum and maximum performance settings can be enabled on individual VMs or sets of VMs should that become necessary, giving you fine-grained control when you need it.
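To make the scheduling idea concrete, here is a minimal Python sketch of cost-based proportional scheduling. It is not Tintri’s implementation: the cost constants, the VmQueue class, and the scheduling loop are illustrative assumptions that only mirror the description above (per-VM queues, IO cost derived from request size plus per-request overhead, proportional dispatch).

```python
from collections import deque

# Hypothetical cost model: a fixed per-request overhead plus a per-KiB charge,
# so large IOs "cost" more than small ones. The constants are illustrative.
PER_REQUEST_OVERHEAD = 1.0
COST_PER_KIB = 0.05

def io_cost(size_kib: float) -> float:
    return PER_REQUEST_OVERHEAD + COST_PER_KIB * size_kib

class VmQueue:
    """One IO queue per VM; tracks the cost charged to the VM so far."""
    def __init__(self, name: str):
        self.name = name
        self.pending = deque()    # queued IO request sizes (KiB)
        self.consumed = 0.0       # total cost charged to this VM

    def submit(self, size_kib: float) -> None:
        self.pending.append(size_kib)

def schedule(vms, budget: float):
    """Dispatch IOs until the cost budget is spent, always picking the VM that
    has consumed the least cost so far. This proportional-share policy keeps a
    noisy neighbor issuing large IOs from crowding out VMs issuing small IOs."""
    dispatched = []
    while budget > 0:
        candidates = [vm for vm in vms if vm.pending]
        if not candidates:
            break
        vm = min(candidates, key=lambda v: v.consumed)
        size = vm.pending.popleft()
        cost = io_cost(size)
        vm.consumed += cost
        budget -= cost
        dispatched.append((vm.name, size))
    return dispatched

# A noisy VM issuing many large IOs still shares the pipeline fairly with a
# latency-sensitive VM issuing small IOs.
noisy, quiet = VmQueue("noisy-vm"), VmQueue("quiet-vm")
for _ in range(100):
    noisy.submit(256)   # 256 KiB IOs
for _ in range(10):
    quiet.submit(4)     # 4 KiB IOs
print(schedule([noisy, quiet], budget=100.0))
```

In the example, the small-IO VM keeps getting dispatched alongside the large-IO VM because each large IO is charged a proportionally higher cost.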
Are you over-provisioning storage or spending hours on performance tuning to satisfy SLAs? Tintri Auto-QoS delivers guaranteed performance.
The Tintri enterprise cloud platform is designed for autonomous operation, freeing your IT team from much of the hard work of infrastructure management and allowing you to focus on higher-value tasks.
Under normal operation, Tintri delivers good performance for every VM. Manual QoS controls, if configured, are invoked to enforce specific guarantees and/or limits. Full visibility into performance at the VM level makes it simple to identify any performance issues and take action should that become necessary.
This approach makes QoS very easy to use. Many sites find they don’t need to do any configuration at all, but you can take advantage of unique features in a wide variety of circumstances such as:
- Throttle a rogue VM. A rogue VM that’s consuming too many resources won’t cause the kind of performance degradation on Tintri storage that you’d see on traditional storage. Nevertheless, throttling may still be desirable in certain use cases, such as for Cloud Service Providers.
- Guarantee performance for critical VMs. By setting minimum IOPS for a VM or set of VMs, you can ensure your most important workloads always get the resources they need, even during times of contention when Auto-QoS alone might not make the tradeoff you intend.
- Establish tiers of service. Tintri QoS makes it simple to create tiers of service and implement chargeback without requiring different storage for each tier (see the illustrative tier sketch after this list). Service Providers, as well as enterprises moving to an IT-as-a-Service (ITaaS) model, love this ability.
- Pinpoint the sources of latency. Simple but powerful visualization lets you see exactly where latency is coming from, allowing you to immediately zero in on the source of an issue and simplify troubleshooting.
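To illustrate how per-VM floors and ceilings can map onto tiers of service, here is a hedged Python sketch of a QoS policy model. The QosPolicy class, the tier names, and the IOPS values are hypothetical placeholders, not Tintri’s API; on a VMstore these settings would be applied through the product’s own management interfaces.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class QosPolicy:
    """Hypothetical per-VM QoS settings: a guaranteed floor and a hard ceiling,
    in IOPS. None means 'leave it entirely to Auto-QoS'."""
    min_iops: Optional[int] = None
    max_iops: Optional[int] = None

# Illustrative service tiers for chargeback; names and numbers are placeholders.
TIERS: Dict[str, QosPolicy] = {
    "gold":   QosPolicy(min_iops=5000, max_iops=None),    # guaranteed floor, no cap
    "silver": QosPolicy(min_iops=1000, max_iops=10000),
    "bronze": QosPolicy(min_iops=None, max_iops=2000),    # capped, best effort
}

def assign_tier(policies: Dict[str, QosPolicy], vm_name: str, tier: str) -> None:
    """Attach a tier's floor/ceiling to an individual VM -- no LUN-level grouping needed."""
    policies[vm_name] = TIERS[tier]

policies: Dict[str, QosPolicy] = {}
assign_tier(policies, "sql-prod-01", "gold")       # critical workload: guaranteed IOPS
assign_tier(policies, "test-runner-07", "bronze")  # rogue-prone workload: throttled
print(policies)
```

Because the policy attaches to an individual VM rather than a LUN, changing a VM’s tier affects only that VM.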
With autonomous operation, software takes care of infrastructure management. The first post in this series, “Autonomous Operation Simplifies Storage Management,” provided an overview of Tintri’s autonomous capabilities and our philosophy regarding autonomous operation. This post drills down further on Auto-Quality of Service (QoS).
The Importance of QoS for Storage Performance
One of the priorities for most IT teams is to modernize infrastructure and consolidate workloads to streamline operations. This is where QoS comes in. If you can’t deliver predictable performance for workloads sharing the same storage, your ability to consolidate is severely limited. In the absence of QoS, the only option is to over-provision storage—effectively putting you right back where you started.
Many storage vendors have begun offering QoS features, but implementations vary widely. So what characteristics should a storage QoS implementation have? There are several factors to consider:
- Auto-QoS. As with any knob, you have to understand how and when to turn it. QoS should be on out of the box so that it works automatically in the majority of situations without requiring configuration.
- Granularity. Most QoS implementations work at the granularity of the LUN or volume. When QoS kicks in to throttle a noisy neighbor, it can unnecessarily impact VMs in the same LUN that aren’t the troublemakers. Setting QoS on a LUN containing 10-40 VMs might not be that meaningful; you may have to organize your storage carefully to get a real advantage from QoS under these circumstances.
- Ceiling. Most QoS implementations allow you to set a ceiling on the number of IOPS or the maximum bandwidth that can be consumed.
- Floor. Much less common is QoS that lets you guarantee a minimum number of IOPS or amount of bandwidth that will be available, reserving a minimum level of performance for each workload. This is extremely important, especially in cases where automatic throttling would otherwise starve critical workloads (a sketch of how ceilings and floors differ follows this list).
- Analytics. You won’t get the full benefit of QoS without analytics that help you understand your operations and act accordingly, and that show you the effect of turning a knob in real time with clearly defined QoS buckets.
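To show how a ceiling and a floor differ mechanically, here is a minimal sketch assuming a token-bucket limiter for the ceiling and an admission check for the floors. It is illustrative only; the TokenBucket class, the floors_are_admissible function, and the IOPS figures are assumptions, not any vendor’s implementation.

```python
import time

class TokenBucket:
    """Ceiling: cap a VM at max_iops by admitting an IO only when a token is available."""
    def __init__(self, max_iops: float):
        self.rate = max_iops            # tokens refilled per second
        self.tokens = max_iops          # allow up to a second's worth of burst
        self.last = time.monotonic()

    def try_admit(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                    # over the ceiling: delay or queue this IO

def floors_are_admissible(min_iops_by_vm: dict, system_iops: float) -> bool:
    """Floor: a guarantee only means something if the sum of all configured
    minimums fits within what the system can actually deliver."""
    return sum(min_iops_by_vm.values()) <= system_iops

limiter = TokenBucket(max_iops=2000)
print(limiter.try_admit())   # True until the bucket drains
print(floors_are_admissible({"sql-prod-01": 5000, "exch-01": 3000}, system_iops=60000))
```

A ceiling is enforced continuously on every IO, while a floor is primarily an admission-time promise: the configured guarantees must collectively fit within the system’s capacity for them to mean anything under contention.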
A final consideration is how easy QoS is to configure and use. You may have hundreds or thousands of VMs in your environment. If you have to explicitly configure QoS limits for each VM, things can get complicated quickly.