Video Game Pipeline
Production pipelines must allow ultra-fast reads from many clients in parallel. Each client performs continual reads against a dataset when building and testing software. Ingest points where new data is added to the system are limited, so write performance is less of a factor. In addition to the "fast read" portion, high-capacity / low-cost storage is required for online archive. Low administrative overhead is desired.
The WARP 38000-H HybridMatrix is a unified storage system with a mix of SSDs and HDDs. SSDs are configured to accelerate reads, while high-capacity drives yield a large pool for long-term storage at low cost.
Depending on scale, the storage pool in a HybridMatrix appliance seamlessly combines high-capacity disk with read-optimized SSDs to deliver up to 8 Gigabytes per second (~80Gbps) of low-latency reads across the entire active dataset from a single 4U shelf. As shelves are added, performance scales linearly.
In video game production, it’s no surprise to find a variety of mixed data types crossing the pipeline. The performance of rich media content like video, audio, and graphics must keep pace with code engines and project timeline goals. When it comes to storage infrastructure, game design companies face a tough decision between affordable “generic” storage and high-performance purpose-built systems. Often, a company will resort to buying name-brand OEM hardware that is neither affordable nor flexible enough for long-term use, nor fast enough to keep pace with advances in game design software and workflows.
Such was the case for a popular game developer in Washington State, USA. The company had several highly acclaimed titles behind it and was working to build sequels, but ran up against the limits of its existing OEM storage system, which served as a “vault” for game binaries. The legacy storage had run its course, topping out at 6TB of usable capacity. The cost to add more capacity was prohibitively high and would provide only marginal benefit. Furthermore, a growing number of client systems pulling from the legacy storage meant that the system was at its performance limit.
To provide storage infrastructure that could meet current production needs as well as scale affordably as the company grew, WARP Mechanics analyzed their dataflow:
Legacy OEM System Characteristics
• 1TB FC “intense read” data
• 5.6TB CIFS online archive data
• 16U: 1x head + 4x expansion shelves
• SMB v1 is slow
• 1 writer per 150 readers
• 300MBps throughput max
Requirements for New System
• Scale from 40TB to 100TB+ capacity
• ~5% of capacity must support “super fast” reads
• ~95% of capacity is a “moderate speed” archive
• 1 client writes into the fast portion
• Hundreds of random readers to that area
• Must have affordable TCO
• Easily scalable, without complex “scale out” software
Capacity & Performance
While 50TB is not a large datastore by modern standards, achieving that capacity in a simple and scalable way was an important factor for the customer. Throwing together a white-box chassis and open source software would provide neither vendor support nor the ability to scale beyond the limits of the initial hardware. Storage from OEMs would provide the capacity, but the acquisition price and long-term TCO of legacy OEM storage are vastly higher than those of the mainstream storage market.
To address scalability, WARP Mechanics offers highly dense storage that is scalable, fast, and affordable. A single 4U chassis can grow from 40TB to 240TB just by adding hard drives: no additional hardware or licenses required. Expanding further is a simple matter of connecting JBODs above or below the initial system: without substantial changes, the system can grow to multiple petabytes. WARPware storage controllers manage the system. When more drives are added, a few simple commands fold them into the existing filesystem immediately.
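The growth path above can be sketched with simple arithmetic. This is an illustration using figures from the text (40TB to 240TB per 4U chassis, linear growth via JBOD shelves); the 4TB drive size matches the customer's build below, while the 60-bay shelf count is an assumption chosen so that a full shelf yields 240TB raw.

```python
# Illustrative capacity math for a shelf-based hybrid system.
# Assumptions (not vendor specs): 4TB drives, 60 bays per 4U shelf.
DRIVE_TB = 4          # per-drive capacity, matching the 4TB SAS HDDs in this build
BAYS_PER_SHELF = 60   # assumed bay count so a full shelf reaches 240TB raw

def raw_capacity_tb(shelves: int, drives_per_shelf: int) -> int:
    """Raw pool capacity when every shelf holds the same number of drives."""
    return shelves * drives_per_shelf * DRIVE_TB

# Initial partially populated chassis: 12 drives -> 48TB raw
print(raw_capacity_tb(1, 12))              # 48
# Fully populated single shelf: 240TB raw
print(raw_capacity_tb(1, BAYS_PER_SHELF))  # 240
# Five fully populated shelves already pass the petabyte mark
print(raw_capacity_tb(5, BAYS_PER_SHELF))  # 1200
```

Because capacity is just shelves times drives, "scaling" here is linear addition of hardware rather than a complex scale-out software layer.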
In addition to scalability, most storage systems struggle to handle mixed workloads without complex performance tuning or exorbitant hardware costs. Legacy systems have limited cache, making it impossible to cache a complete working data set. To work around this, OEMs offer SSD options, but these solutions inevitably carry additional license fees. WARP Mechanics never charges extra to activate the hardware customers already paid for, which finally makes hybrid solutions affordable.
The WARP Mechanics family of unified appliances combines the best of open source software and enterprise-grade hardware in a hybrid storage solution that doesn’t break the bank. The game developer decided to start with the WARP 38000-H HybridMatrix, partially populated with:
• 12x 4TB SAS HDDs
• 5x 600GB 15k RPM Enterprise SAS HDDs
• 4x 500GB read-optimized SSDs
• Dual controllers, E5-2620 CPU + 64GB RAM
• 4x 10Gb Ethernet ports (built-in)
At the factory, WARP Mechanics preconfigured the system with both a “bulk” and a “fast” storage pool.
The high-capacity NL-SAS drives were configured in a dual-parity RAID to increase reliability for the “bulk” online archive. Even with a small number of spindles, this pool met the customer’s moderate write performance requirement.
The “fast” ingest pool used high-speed HDDs as a small (2TB) RAID set, coupled with a 1:1 ratio of SSDs acting as read cache. Offloading the entire “intense read” activity onto SSDs allowed both reads and writes to go faster, with ~2 Gigabytes per second (~20Gbps) available for reads, several times the 300MBps ceiling of the legacy OEM solution. Backed by SAS HDDs, WARPware seamlessly allows low-latency writes without risking excessive wear to the SSDs and without the administrative overhead required by legacy hierarchical storage management (HSM) systems.
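The reason every read can be served from flash follows directly from the 1:1 ratio above: the SSD cache is as large as the fast pool itself, so the whole "intense read" working set fits in cache. A minimal sketch using the drive counts from the bill of materials:

```python
# Cache coverage of the "fast" ingest pool, using figures from the text.
fast_pool_tb = 2.0       # 2TB RAID set on 15k RPM SAS HDDs
ssd_cache_tb = 4 * 0.5   # 4x 500GB read-optimized SSDs

coverage = ssd_cache_tb / fast_pool_tb
print(coverage)  # 1.0 -> the entire working set can live in the read cache
```

With coverage at 1.0, read misses to spinning disk become rare, which is what lets hundreds of random readers hit the fast tier without disturbing the single writer.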
Overall, the solution met or exceeded all current requirements, and provided a seamless growth path for years to come.