KEY TAKEAWAYS
- Powered by Elasticsearch, Apache Spark and Amazon Machine Learning, Tintri Analytics takes data from applications running in your environment, simulates scenarios, and outputs detailed what-if analysis for your deployments.
- By mapping the performance of your individual applications and monitoring their request sizes, VMstore’s Auto-QoS creates profiles for different request sizes automatically.
- VM Scale-Out leverages machine learning to identify your applications’ storage capacity and performance needs and recommend actions based on those needs. All you have to do is hit “execute.”
With powerful machine learning algorithms, Tintri Analytics, VM Scale-Out, and quality of service make it easy to work smarter, not harder.
If you’re in an autonomous vehicle, you’re relying on a lot of things to keep you safe. Machine vision that can differentiate between stop signs and speed limit markers. Sensors that account for even the most minuscule maneuvers on the highway. And of course, real-time telemetry that can analyze, interpret and execute on millions of spontaneous stimuli.
If even one piece of your autonomous vehicle’s software fails, you’re in danger. If your software isn’t precise enough, or if your telemetry is delayed, or if it can’t read data at a granular level, your autonomous vehicle’s going to run off the road, and maybe even crash.
It’s the same with your enterprise cloud. That’s why Tintri’s bringing you powerful machine learning capabilities for Tintri Analytics, Tintri VM Scale-Out and Tintri VMstore Quality of Service. With powerful prediction and planning tools based on individual applications’ data, Tintri’s machine learning algorithms make sure you stay on the right track.
Because if you’re getting your infrastructure ready for the future, you better make sure you’ve got the tools for the job.
Tintri Analytics
When it comes to machine learning for analytics, our competitors like to talk big. They’ll brag about load projections: taking the write IOPS sent to a volume on an array over the past six months and using them to predict what the load for a particular workload might look like going forward. And they’ll call that cutting-edge stuff that customers have never seen before.
Well, I’m sure their customers have never seen that before, but Tintri’s been doing more complicated, more granular analytics for years now with Tintri Analytics. And instead of relying on per-volume or per-LUN correlations and averages for each workload, Tintri Analytics looks at granular per-application analytics for its predictions. In other words, our autonomous vehicle navigates by considering every single factor in real time, instead of risking a crash with delayed, inaccurate sensors.
Tintri Analytics vs. conventional analytics
Let’s say you’re planning a large SQL deployment in the next three months. Normally, you’d have to do some guesswork about how this would affect your performance and capacity, even if your software is smart enough to map your LUNs.
But with powerful machine learning algorithms powered by Elasticsearch, Apache Spark and Amazon Machine Learning, Tintri Analytics takes millions of data points from real applications running in your environment, simulates scenarios, and outputs detailed what-if analysis that projects how your SQL deployment will affect your infrastructure … up to 18 months into the future. And it all happens nearly instantly as you enter and adjust your scenario.
That means no matter what you’re preparing for, you can tell your CIO far in advance exactly what you need to budget to make sure your applications are deployed successfully.
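To make the idea concrete, here’s a minimal sketch in Python of what a capacity what-if projection might look like under the hood: fit a growth trend to historical per-application capacity samples and overlay a hypothetical new SQL deployment to see when headroom runs out. This is not Tintri’s actual pipeline; every name and number below is an illustrative assumption.

```python
# Illustrative sketch only: a toy what-if capacity projection.
# NOT Tintri's pipeline; names and numbers are made-up assumptions.
import numpy as np

# Historical capacity samples for one application (GiB), one point per month.
history_gib = np.array([820, 860, 905, 955, 1010, 1070])
months_seen = np.arange(len(history_gib))

# Fit a simple linear growth trend (a real system would use richer models).
slope, intercept = np.polyfit(months_seen, history_gib, deg=1)

# "What-if": a planned SQL deployment adds 2 TiB in month 3 of the projection.
planned_addition_gib = 2048
deployment_month = 3
future_months = np.arange(len(history_gib), len(history_gib) + 18)
projection = slope * future_months + intercept
projection[future_months >= len(history_gib) + deployment_month] += planned_addition_gib

array_capacity_gib = 10_000  # hypothetical usable capacity of the array
exhausted = future_months[projection > array_capacity_gib]
if exhausted.size:
    print(f"Projected to exceed capacity in month {exhausted[0]}")
else:
    print("Capacity headroom holds for the 18-month projection")
```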
Good data makes good predictions
In any machine learning model, the quality, quantity and depth of the available data, combined with the quality of the model itself, determine how accurate and useful the predictions are. Conventional “predictive” analytics simply can’t supply the granular data that accurate future trend analysis requires. Even HCI vendors with application-level granularity can’t tell how different workloads consume storage performance; you need more than granularity to measure per-workload performance consumption accurately.
Tintri Analytics not only uses granular, application-level data, but also constantly records new VMs, their changing sizes, space savings and growth rate, enriching your predictions as time goes on with up to three years of historical data. That’s how you really leverage machine learning for analytics.
VMstore Quality of Service
Here’s a question for those of you who aren’t using Tintri VMstore: how do you calibrate quality of service for each of your individual applications?
It’s a leading question, I know—per-application quality of service just isn’t possible for our competitors. But we’ve been offering it to our customers for years.
Tintri offers drag-and-drop QoS for your applications and service groups, so you can remove noisy neighbors and take down rogue applications. But like any good engineer, we like to automate. And self-driving QoS could be your favorite thing about Tintri.
Calibrating for your own applications
Auto-QoS in VMstore lets you predict and set QoS automatically for each individual application based on the request size of your VMs. To calibrate auto-QoS, we use—drumroll—machine learning. Let’s walk you through how we do it with the application-level Tintri CONNECT architecture:
- VMstore maps the performance of every individual application in your array. Just as we do with Tintri Analytics, our machine learning algorithm skips correlations and digs right into the level of abstraction that matters: your virtualized cloud applications.
- VMstore constantly monitors your applications’ request sizes. One application might issue 8K requests while another issues 256K requests. Machine learning algorithms learn how many requests come in at each size, and before you know it, they’ve picked up an accurate pattern.
- Based on those patterns, VMstore creates profiles for different request sizes, automatically. After understanding the patterns of your 8K and 256K request sizes, Tintri machine learning can extrapolate for other request sizes, knowing exactly how much QoS anything coming in at a given block size will need (see the sketch after this list).
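To illustrate the idea (and only the idea), here’s a minimal sketch of how a system could learn a per-VM request-size profile and use it to normalize QoS accounting. The 8 KiB base size, the helper names and the weighting are assumptions for illustration, not Tintri’s implementation.

```python
# Minimal sketch, not Tintri's implementation: derive a per-VM request-size
# profile from observed I/O and use it to normalize QoS accounting.
from collections import Counter

def record_io(profile: Counter, request_size_kib: int, count: int = 1) -> None:
    """Accumulate observed request sizes (e.g. 8 KiB, 256 KiB) for one VM."""
    profile[request_size_kib] += count

def normalization_factor(profile: Counter, base_kib: int = 8) -> float:
    """Weight raw IOPS by typical request size so a 256 KiB request
    'costs' more than an 8 KiB one (the base size is an assumption)."""
    total = sum(profile.values())
    if total == 0:
        return 1.0
    weighted_kib = sum(size * count for size, count in profile.items()) / total
    return max(weighted_kib / base_kib, 1.0)

# Usage: a VM that mostly issues large requests gets a higher cost per I/O.
vm_profile = Counter()
record_io(vm_profile, 8, count=900)     # 900 small requests observed
record_io(vm_profile, 256, count=100)   # 100 large requests observed
factor = normalization_factor(vm_profile)
raw_iops, qos_limit_normalized_iops = 2_000, 5_000
print(f"normalized IOPS = {raw_iops * factor:.0f} vs limit {qos_limit_normalized_iops}")
```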
The bread and butter of your filesystem
Manually adjusting your QoS for each request size is no longer necessary. With Tintri Auto-QoS, just as with Tintri Analytics, Tintri’s granular data and powerful machine learning algorithms do the manual labor, so you can focus on the hard stuff.
VM Scale-Out
Conventional storage scale-up and scale-out solutions are expensive and hardware-dependent, and require a team of storage Ph.D.s to manage them. That’s why Tintri VM Scale-Out scales out your storage the same way you scale out compute in your enterprise cloud and virtualized environment: by just adding another server and letting the hypervisor manager optimize VMs across the pool.
That’s the real magic of VM scale-out: optimizing pools of storage based on real application needs and machine learning algorithms so your team can concentrate on what your application needs instead of babysitting your storage.
How it works
- VM Scale-Out identifies your applications’ storage capacity and performance needs. It all comes back to Tintri’s application-level CONNECT architecture. When you and Tintri can see across compute, network and storage at the application level, you’re never at a loss for what your applications want.
- VM Scale-Out recommends actions based on those needs. Tintri finds the best location for every individual application, using—you guessed it—powerful machine learning algorithms that check the cost to move an application, its historical resource consumption and other factors (a simplified scoring sketch follows this list). Want to make changes? Edit recommendations and VM Scale-Out learns from you. Ready to go? Just hit “execute.”
- VM Scale-Out predicts the outcomes of its recommendations. When you hit “execute,” VM Scale-Out will let you know the results of your VM placement, and how long it’ll take to execute. And with storage live migration offload, it’ll take minutes rather than hours.
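For a sense of how such a recommendation might be scored, here’s a minimal sketch that ranks candidate stores for a VM by capacity fit, performance headroom and the cost of moving its data. The data model, weights and numbers are all illustrative assumptions rather than Tintri’s actual algorithm.

```python
# Illustrative sketch only (not Tintri's algorithm): score candidate stores
# for a VM using capacity fit, performance headroom, and the cost of moving
# the VM's data. Every field, weight and number here is an assumption.
from dataclasses import dataclass

@dataclass
class Store:
    name: str
    free_capacity_gib: float
    perf_headroom_pct: float   # 0-100, remaining performance headroom

@dataclass
class VM:
    name: str
    size_gib: float
    avg_iops: float
    current_store: str

def placement_score(vm: VM, store: Store, move_cost_per_gib: float = 0.0002) -> float:
    if store.free_capacity_gib < vm.size_gib:
        return float("-inf")                      # doesn't fit at all
    capacity_fit = (store.free_capacity_gib - vm.size_gib) / store.free_capacity_gib
    perf_fit = store.perf_headroom_pct / 100.0
    # Small per-GiB penalty so large migrations are only recommended when worth it.
    move_cost = 0.0 if store.name == vm.current_store else vm.size_gib * move_cost_per_gib
    return 0.5 * capacity_fit + 0.5 * perf_fit - move_cost

def recommend(vm: VM, pool: list) -> Store:
    return max(pool, key=lambda store: placement_score(vm, store))

pool = [Store("vmstore-a", 1500, 20), Store("vmstore-b", 6000, 70)]
vm = VM("sql-prod-03", size_gib=900, avg_iops=4000, current_store="vmstore-a")
best = recommend(vm, pool)
print(f"Recommend placing {vm.name} on {best.name}")
```

In practice the interesting part is the weighting: a real system would learn the weights and the move-cost estimate from historical resource consumption rather than hard-coding them, which is where the machine learning comes in.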
More on machine learning to come
That’s just a taste of machine learning at Tintri. And we’re just getting started.
Our distinguished competition can use machine learning to calculate performance headroom at the array level, or create “what-if” analysis per LUN one year into the future. Tintri’s been built from the ground up to measure per-workload performance consumption, calculate performance and capacity for the next three years, and automatically set quality of service or optimize placement … all at the application level.
We know this is a high-level overview, so if you’re ready for the technical details, stay tuned. We’ve got a whole machine learning series coming up. In the meantime, check out the rest of our launch blogs. There’s a lot to dig into!