No Shortcuts. No Doubts. The Story Behind Quantum’s 4K Testing.

Published on Apr 20, 2017 in Quantum

This post was originally published on the Quantum website.

Increasing demand for 4K content has film and television professionals looking closely at faster storage solutions. But too often, storage providers' 4K performance claims are confusing: the tested 4K formats lack specificity, and the hardware configurations are ambiguous. This raises the question: did they actually test, or just guesstimate? There are no shortcuts to real performance metrics. Estimates can be done on paper, but they are just that: estimates.

Quantum’s StorNext® data management platform has been part of high-resolution film and video workflows for nearly two decades because it excels at providing fast, predictable bandwidth for multiple users sharing the same content. Some of the first major motion picture 4K workflows relied on StorNext. We knew that StorNext could provide industry-leading 4K performance—we just had to do the hard work to prove it.

No Shortcuts

Xcellis™ workflow storage solutions are extremely configurable, with capacity and performance array combinations for nearly every budget. That means there are a lot of possible combinations to test. We chose to test 14 different configurations. We also made sure the arrays we tested included the most popular drive form factors in use today: all-flash arrays with solid-state drives (SSDs), 2.5-inch small-form-factor (SFF) drives, and 3.5-inch large-form-factor (LFF) drives.

What did we test? There are significant differences in the performance requirements of compressed and uncompressed 4K formats, ranging from 100 MB/s to nearly 2 GB/s. We chose to test performance for six 4K formats: three uncompressed and three compressed.
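To see where data rates in that range come from, here is a back-of-the-envelope calculation for one uncompressed 4K stream. The post does not list its exact test formats, so the parameters below (4096x2160 frames, 10-bit RGB DPX packed into 4 bytes per pixel, 24 fps) are illustrative assumptions, not the tested configurations:

```python
def stream_bandwidth_mb_s(width, height, bytes_per_pixel, fps):
    """Sustained data rate of one uncompressed video stream, in MB/s (10^6 bytes)."""
    return width * height * bytes_per_pixel * fps / 1e6

# One assumed 4K DPX stream at 24 fps: roughly 850 MB/s per stream.
print(stream_bandwidth_mb_s(4096, 2160, 4, 24))
```

At higher frame rates the same math approaches the 2 GB/s end of the range, while compressed formats land near the 100 MB/s end.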

Each configuration was first tested with an empty file system. But storage performance decreases as free space is consumed, so performance near peak capacity is a more accurate characterization. For this, we used an automated test application to fill each configuration to 85% of capacity (sometimes an overnight process), and then repeated each test.
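The post does not describe the automated fill application, but the core bookkeeping it needs is simple. This is a minimal sketch, assuming the tool knows the file system's total and used capacity and writes dummy data until a target utilization is reached:

```python
def bytes_to_reach_utilization(total_bytes, used_bytes, target=0.85):
    """How much more data must be written so the file system hits `target` utilization.

    Returns 0 if the file system is already at or above the target.
    """
    return max(0, int(total_bytes * target) - used_bytes)

# Example in TB units: a 100 TB file system with 20 TB used needs 65 TB more.
print(bytes_to_reach_utilization(100, 20))
```

At the data rates above, writing tens of terabytes of filler explains why the fill step was sometimes an overnight process.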

How many tests is that?

14 configurations x 2 capacity states x 2 operations (read and write) x 6 formats = 336 tests
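The test matrix above is a simple Cartesian product, which can be enumerated directly (the labels for configurations and formats here are placeholders):

```python
from itertools import product

configs = range(14)                        # 14 Xcellis configurations
capacity_states = ["empty", "85% full"]    # 2 capacity states
operations = ["read", "write"]             # 2 operations
formats = range(6)                         # 3 compressed + 3 uncompressed 4K formats

test_matrix = list(product(configs, capacity_states, operations, formats))
print(len(test_matrix))  # 336
```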

Let the Tests Begin!

StorNext-based solutions deliver superior performance for film and video workflows because StorNext is a tunable, parallel file system that works as fast as block-level storage. This is possible because StorNext features native client software for Linux, Mac OS, and Windows workstations. That means there are tunable parameters for the clients and the servers that coordinate simultaneous access to files. Prior to any large-scale testing like this, it is important to set a baseline by verifying that these parameters perform optimally with the latest hardware and networking technologies.

We began in November with clean systems, unboxing and racking over 1.7 PB of storage.

Testing began a short time after and continued for a little over two months, with only a short break for the holidays. Quantum’s engineering, product management, application engineers, and technical marketing teams in four states worked together to collect data, analyze results, run down anomalies, optimize settings, and draft best practice guidelines.

Drive Form Factor Matters

One thing surprised us a bit. It may seem logical that you could ask the slower, 3.5-inch drives to handle the compressed streams because they require lower data rates. Conversely, the faster drives would be tasked to handle the uncompressed streams. What we found was that the number of streams has much more impact on performance than the data rate of any given stream. This is due to the latency associated with the drive heads navigating among the streams, regardless of the data rate.

So, workflows that require a large number of streams to be delivered to editing or visual effects (VFX) workstations will benefit from faster 2.5-inch HDDs and SSDs, even when per-stream data rates are low. The large-capacity 3.5-inch drives have no trouble handling uncompressed formats (such as RGB DPX) if the stream count is modest.

We knew there would be latency between streams, but it had a greater impact on performance than expected.

It’s All About the File System

Scale-out NAS solutions that can support collaborative HD workflows may struggle to deliver the 4K stream counts we saw in our testing. The TCP/IP protocols used in scale-out NAS systems lack the client-side controls cited above, which are critical for coordinating shared access to massive files. Further, IP file transfers consume a significant percentage of client CPU cycles, constraining the power needed to run workload operations.

Finally, scale-out NAS systems do not use true parallel file systems. Requested files must be collected from different nodes to a primary node and reconstructed there before they are transferred to the client. In addition to fighting IP latency, this node-restricted architecture can limit the file transfer speeds required for multi-stream 4K workflows.

StorNext is a true parallel file system, with intelligence at both the nodes and the clients. Blocks of a file are sent directly to the client from the node where they reside, which accounts for the tremendous performance results in our tests.

No Doubts

The overall results were impressive. Every Xcellis configuration supported at least one stream of sustained uncompressed 4K at 85% capacity. We compared our results to other storage vendors—and we led the pack every time we compared similarly configured arrays. StorNext with all-flash arrays delivered jaw-dropping stream counts. We’ll be showing off the performance of our all-flash array at the 2017 NAB Show in a few weeks at booth SL5810.

Performing these rigorous real-world tests took a lot of work, but we enjoyed it—especially when we saw the results. We know that 4K workflows will be new for many. We hope that removing some of the doubts will make the transition to 4K a lot easier for those who choose Quantum.
