Tintri Predictive Analytics
Plenty of storage players claim to offer predictive analytics, but their analytics are not useful for making decisions about applications. Why? LUNs and volumes.
Each LUN or volume stores tens or even hundreds of VMs. Conventional storage can tell you the average performance of a LUN or volume, but averages mislead. Think about what happens when Bill Gates visits a McDonald's: the average net worth of the restaurant's patrons is in the billions. Average LUN performance has the same problem. It may tell you very little about any individual VM. In contrast, Tintri VM-aware storage maintains detailed data about each individual virtual machine and provides information you can actually use to make decisions.
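A quick sketch makes the point concrete. In this toy example (VM names and latency figures are invented, not taken from any real system), one noisy VM on a shared LUN barely moves the LUN-level average, yet per-VM data exposes it immediately:

```python
# Hypothetical per-VM latencies (ms) on one shared LUN.
# Figures are invented for illustration.
vm_latency_ms = {
    "web-01": 0.3, "web-02": 0.3, "web-03": 0.4,
    "sql-01": 0.4, "sql-02": 8.6,   # sql-02 is the noisy neighbor
}

# The LUN-level average looks unremarkable...
lun_average = sum(vm_latency_ms.values()) / len(vm_latency_ms)

# ...but per-VM data reveals the outlier the average hides.
outliers = {vm: ms for vm, ms in vm_latency_ms.items() if ms > 3 * lun_average}

print(f"LUN-level average latency: {lun_average:.1f} ms")
print(f"VMs hidden behind the average: {outliers}")
```

The LUN average comes out at 2.0 ms, a number that describes none of the five VMs well; only the per-VM view shows that `sql-02` is running an order of magnitude slower than its neighbors.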
With our new predictive analytics you can profile classes of applications—grouping (for example) all your SQL servers. You can see their average use of capacity and performance and then drill into each individual SQL server to spot outliers that need attention. When you're asked to add 20 more SQL servers, you can use your current profile to model the impact on your footprint.
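The profiling-and-modeling workflow can be sketched in a few lines. This is a hypothetical illustration (the VM names, capacity, and IOPS figures are all invented), not Tintri's actual model:

```python
# Hypothetical profile of existing SQL Server VMs.
sql_vms = {
    "sql-01": {"capacity_gb": 250, "iops": 1200},
    "sql-02": {"capacity_gb": 310, "iops": 1500},
    "sql-03": {"capacity_gb": 280, "iops": 900},
}

# Profile the group: average capacity and performance per VM.
n = len(sql_vms)
avg_capacity = sum(v["capacity_gb"] for v in sql_vms.values()) / n
avg_iops = sum(v["iops"] for v in sql_vms.values()) / n

# Model the impact of adding 20 more SQL servers with the same profile.
new_vms = 20
print(f"Projected extra capacity: {new_vms * avg_capacity:.0f} GB")
print(f"Projected extra IOPS:     {new_vms * avg_iops:.0f}")
```

With these invented numbers, the 20 new servers project to roughly 5,600 GB of additional capacity and 24,000 additional IOPS, which is exactly the kind of footprint estimate the profiling feature is meant to produce.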
And with total VM-level visibility, Tintri can forecast your need for storage capacity and performance with precision. You can analyze up to three years of historical data or zoom in on more recent patterns. And this is no chore—we've used the latest Apache Spark and Elasticsearch technologies, so crunching data on half a million VMs takes less than one second.
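The core idea behind a capacity forecast is simple: fit a trend to historical usage and extrapolate. Here is a minimal least-squares sketch over invented monthly capacity data; it stands in for, and is far simpler than, what the product actually computes:

```python
# Invented monthly capacity history (TB), months 0..5.
history_tb = [40, 43, 47, 50, 55, 59]
months = range(len(history_tb))

# Simple least-squares linear fit: slope (growth rate) and intercept.
n = len(history_tb)
mean_x = sum(months) / n
mean_y = sum(history_tb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, history_tb)) \
        / sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

# Forecast the capacity needed six months past the last data point.
forecast = intercept + slope * (n - 1 + 6)
print(f"Growth rate: {slope:.2f} TB/month, 6-month forecast: {forecast:.1f} TB")
```

On this sample data the fit shows growth of about 3.8 TB per month, projecting to roughly 81.5 TB six months out.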
Our scale-out storage platform
Scalability is the Holy Grail of data center storage. The dream is that you can start with a small storage configuration and grow it incrementally to a very large environment. Achieving this goal has been challenging because scalable storage must satisfy several constraints at once. It needs to be cost-effective, high-performance and robust in the presence of failures. It should also be easy to manage at small and large scale. Finally, it should provide data management features like snapshots, replication and cloning. I would argue that no product has yet met all of these constraints for scalable primary storage.
The problem of scalability has also become harder over the last five years. Driven by the cloud, the notion of "large scale" has grown: it now means tens or hundreds of thousands of applications, not hundreds or thousands. In addition, flash has raised performance expectations, and sub-millisecond latency is now expected.
Today, we are launching a new scale-out storage platform that we believe will transform data center storage. It is designed for the modern data center where applications are virtualized and the storage is all flash. With the scale-out platform, you can start with a single 17 TB all-flash node and grow to 10 PB. You can manage from hundreds of VMs to hundreds of thousands, all from a single management console and with just one employee. This scale-out simplicity is only possible because the platform is built on Tintri’s Intelligent Infrastructure.
We have taken an approach to scaling that is similar to the way VMware scales compute using Distributed Resource Scheduler (DRS). DRS is based on federated pools of compute nodes, and our architecture is based on federated pools of storage. You can mix and match different kinds of storage nodes, both all-flash and hybrid. And it's futureproof: our platform accommodates all existing VMstores and future VMstores for total investment protection.
The architecture is designed for scalability, performance and availability. Today we support a maximum of 32 storage nodes, but the architecture is designed to support much larger configurations. There is no custom network interconnect between the nodes, and control flow is separated from data flow: reads and writes do not need to be forwarded through an intermediary node.
VM scale-out software
Today’s announcement includes a new foundational piece of Tintri’s VMstore scale-out platform, which we call VM Scale-Out. The VM Scale-Out software uses a sophisticated algorithm to optimize the distribution of virtual machines. It constantly works in the background to identify the best placement for every VM.
The VM Scale-Out software makes placement recommendations based on actual data about your VMs' behavior. In a large scale system, the algorithm crunches more than one million statistics every ten seconds, looking back 30 days to capture data peaks and not just averages. The software recommends placement based on the capacity and performance requirements of each VM; it even considers snapshots, clones, thin provisioning and storage activity to ensure it moves the least amount of data in the shortest time. And when a VM is migrated to a new storage node, the data protection and QoS policies you've set for that VM follow, along with its snapshots and statistics.
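To illustrate why peaks matter for placement, here is a deliberately simplified greedy sketch (not Tintri's actual algorithm; node names, headroom figures and VM peaks are all invented). Each VM is sized by its 30-day peak demand rather than its average, and placed on the node with the most remaining headroom:

```python
# Greatly simplified, peak-aware placement sketch. Not the real
# VM Scale-Out algorithm; all names and numbers are invented.

nodes = {"node-a": 50_000, "node-b": 50_000}   # remaining IOPS headroom
placements = {}

# (VM, 30-day *peak* IOPS) -- peaks, not averages, so bursts still fit.
vms = [("sql-02", 9_800), ("web-01", 2_500), ("sql-01", 7_000)]

# Greedy: place the most demanding VMs first, each on the node
# with the most headroom left.
for vm, peak_iops in sorted(vms, key=lambda v: -v[1]):
    target = max(nodes, key=nodes.get)
    nodes[target] -= peak_iops
    placements[vm] = target

print(placements)
```

Even this toy version shows the shape of the problem: sizing by averages instead of peaks would let bursty VMs overcommit a node, which is exactly what the real algorithm's 30-day lookback is designed to prevent.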