In my first blog post, I recounted how Tintri VMstore saved both my team and the greater company from the endless hassles of performance issues with conventional storage. In this post, I’d like to talk about how much time we regained. It’s a well-known truism in business that “time is money,” and running Tintri VMstores has saved us a bundle!
When we bought our first VMstore, the regional sales manager told us, “You’ll run out of space before you run out of performance.” We took this advice with a grain of salt, but it turned out to be true. That VMstore was happily humming along as fast as the day we bought it, even as it was starting to get full. So, we promptly ordered a second one. Once our second system arrived, I sent a more junior administrator down to the data center to rack it (such are the perks of seniority). My colleague’s remit was to cable it up and ensure it powered on; I would do the rest. Upon opening the box, however, he saw the Quick Start Guide and realized how simple the setup was. He not only cabled up the power and networking but also performed the initial configuration steps, leaving it ready and available to serve data. When I was notified that the VMstore was ready to go, I double-checked his work and connected it to vCenter . . . all in less than an hour!
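For readers who want to see just how little the “connect it to vCenter” step involves, here is a minimal sketch of it using pyVmomi: mounting the VMstore’s NFS export as a datastore on every ESXi host. The vCenter address, credentials, export path, and datastore label below are illustrative stand-ins, not our actual values.

```python
# Minimal sketch: mount a VMstore NFS export as a datastore on all ESXi
# hosts. Every name below (vCenter address, credentials, export path,
# datastore label) is an illustrative example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Collect every ESXi host in the inventory.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

# Describe the NFS export the VMstore presents.
spec = vim.host.NasVolume.Specification(
    remoteHost="vmstore01.example.com",  # the VMstore's data interface
    remotePath="/tintri/datastore",      # its NFS export
    localPath="tintri-vmstore01",        # datastore name shown in vCenter
    accessMode="readWrite",
    type="NFS")

for host in hosts:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```

One loop, one spec; that, plus the cabling my colleague had already done, was essentially the whole job.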
Those of you experienced with enterprise storage arrays may be shaking your heads in disbelief, while others without that experience might not see what the big deal is. So, let me describe some experiences I’ve had with other storage arrays. Typically, an enterprise storage array arrives in multiple components: one or more head units, one or more drive shelves, and possibly one or more interconnect switches. Simply ensuring that these components are racked and cabled correctly is a task usually reserved for skilled professionals. It’s easy to get wrong and often difficult to fix once the array is in production. I’ve certainly seen it done “correctly” at the outset, only for us to discover that the configuration didn’t allow further expansion without re-cabling. That re-cabling alone is usually a multi-hour process, sometimes spanning several days (or, in one case, months) when all the correct parts aren’t shipped.
Once your freshly minted array is racked, cabled, and powered on, you then need to configure it. RAID groups typically need to be set up, storage protocols need to be configured (which may include Fibre Channel zoning and initiator configuration/registration), and finally LUNs or volumes need to be built and presented to the clients. If you’re using block storage, you will, of course, also need your storage fabric built out. Altogether, it’s not uncommon for the initial deployment of a new array to take anywhere from many hours to several days, depending on what issues are encountered and whether any parts are missing. It’s noteworthy that many people have made a very good living specializing in installing and configuring enterprise storage arrays; with Tintri, that role simply doesn’t exist. Setting one up is no more difficult than connecting a new physical server to your existing Ethernet network.
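To make that contrast concrete: even after a block array is zoned and its LUNs are masked, each host still has chores before a LUN can become a datastore. Here is a sketch of that host-side follow-up, again in pyVmomi and with illustrative connection details, rescanning each host’s HBAs and listing the newly visible disks that could back a VMFS datastore.

```python
# Illustrative host-side follow-up for block storage: rescan HBAs and list
# newly visible disks eligible for VMFS. Connection details are examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    storage = host.configManager.storageSystem
    storage.RescanAllHba()  # discover newly zoned/masked LUNs
    storage.RescanVmfs()    # discover any VMFS volumes already on them
    for disk in host.configManager.datastoreSystem.QueryAvailableDisksForVmfs():
        size_gib = disk.capacity.block * disk.capacity.blockSize / 1024**3
        print(f"{host.name}: {disk.canonicalName} ({size_gib:.0f} GiB)")

Disconnect(si)
```

And that still leaves building the VMFS datastore itself; with the VMstore’s single NFS mount, none of these steps exist.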
Unfortunately, the Tintri VMstores we purchased didn’t make the need for block storage or dedicated file-serving appliances go away entirely. We believed we needed block storage (although Tintri’s SQL Integrated Storage probably weaned us off 90% of it), and we certainly needed the kind of robust file-serving capabilities that are the main purview of a particular well-known storage brand. So, we duly deployed that brand to service our non-virtualization storage needs. As we added workload to those arrays, we continually needed to rebalance the load, breaking up volumes to distribute them across storage nodes and disk groups. We closely monitored CPU and disk utilization, as well as storage latency, to ensure that important workloads like production SQL databases and critical file volumes received the resources they needed.
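To give a flavor of that babysitting, here is a stripped-down version of the kind of latency check we kept pointed at those arrays, reading vSphere’s per-datastore read-latency counter through pyVmomi. The threshold, credentials, and host names are illustrative, not our actual alerting policy.

```python
# Sketch of a latency watchdog: flag any datastore whose most recent
# real-time read-latency sample exceeds a threshold. Names are examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

THRESHOLD_MS = 20  # illustrative alert threshold

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="monitor@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Resolve "datastore.totalReadLatency.average" (reported in ms) to its ID.
ids = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
       for c in perf.perfCounter}
latency = ids["datastore.totalReadLatency.average"]

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    query = vim.PerformanceManager.QuerySpec(
        entity=host, intervalId=20, maxSample=1,
        metricId=[vim.PerformanceManager.MetricId(counterId=latency,
                                                  instance="*")])
    for metric in perf.QueryPerf(querySpec=[query]):
        for series in metric.value:
            # series.id.instance is the datastore's internal UUID.
            if series.value and series.value[-1] > THRESHOLD_MS:
                print(f"{host.name}: datastore {series.id.instance} "
                      f"read latency {series.value[-1]} ms")

Disconnect(si)
```

We ran checks like this constantly against the other arrays.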
Our Tintri VMstores just worked.
Keep in mind that much of our Tintri estate consisted of hybrid models running a mix of flash and SATA drives, whereas the “well-known storage brand” arrays were all-flash.
By putting our virtual machines on VMstores, we got back the bulk of the time we would have spent on deployment, configuration, and performance tuning. That is time spent solving other problems or completing other projects. It’s time spent building capabilities on top of the storage rather than managing it. Finally, it’s time spent not worrying about the requirements that come with traditional storage arrays. As any sysadmin can tell you, “Time spent not stressing about your technology is the best time of all.”