IT Fights Back - Executive Viewpoint 2013 Prediction: Starboard Storage

By Kirill Malkin
Wednesday, January 23rd 2013

2012 has definitely been the year of The Cloud. Everywhere you look, there is a cloud service or a server/storage system built for The Cloud, and small and medium enterprises are just one demographic that has been seduced by its allure. Why wouldn’t they be? Their internal storage systems are inefficient and costly to manage. With mounting wage bills, rising operational costs and an inability to keep up with data growth rates, having someone else manage your storage seems pretty attractive. If you believe the hype, the only thing stopping them from moving more workloads to the cloud has been fear over data security.

As with all things, though, there are cycles, and in 2013 we are going to see the broad emergence of more efficient internal storage choices that make the cloud less attractive for mainstream data management. Sure, we will still use our cloud-based ERP and customer relationship management systems, but core IT will have far more cost-competitive choices that enable data management to remain in-house.

As some of us already know, over the past couple of years the storage networking industry has been going through a quiet revolution. Scale-up storage array designs built on hard disks now offer petabytes of raw capacity and are being augmented with terabytes of ultra-high-performance solid-state tiers or caches. Proprietary single-processor hardware architectures are being replaced with x86-based multi-core number crunchers running at 3 GHz or more per core. Dynamic RAM cache capacities have grown by at least an order of magnitude, now reaching tens and even hundreds of gigabytes.

Storage and network connectivity speeds have kept pace as well – 6 Gb/s SAS and SATA disks are ubiquitous, 10 GbE is the de facto networking standard, as is 8 Gb/s Fibre Channel, with next-generation enhancements just around the corner.

With such massive improvements across storage hardware technologies, the natural expectation is that storage systems will follow the example of hypervisors and provide shared storage services to a hefty mix of different applications. The problem, however, is that legacy architectures are still constrained by RAID group constructs and siloed SAN and NAS capabilities, leaving capacity and resources underutilized and making the systems complex to scale and tune.
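To see why siloed RAID groups strand capacity, consider a back-of-the-envelope sketch. The numbers and names below are invented purely for illustration and are not drawn from any particular product:

```python
# Hypothetical numbers, for illustration only: three fixed RAID groups,
# each dedicated to a single application or protocol silo.
raid_groups = {"SAN_app_A": 20.0, "NAS_app_B": 20.0, "SAN_app_C": 20.0}  # usable TB
used        = {"SAN_app_A": 18.5, "NAS_app_B":  4.0, "SAN_app_C":  9.0}  # TB consumed

# With siloed groups, app A is nearly full even though the system as a
# whole is barely half used -- its neighbors' free space is stranded.
for name in raid_groups:
    free = raid_groups[name] - used[name]
    print(f"{name}: {free:.1f} TB free, unusable by the other applications")

total_free = sum(raid_groups.values()) - sum(used.values())
print(f"A pooled design would expose {total_free:.1f} TB of free capacity to every application")
```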

Enter the Freshman Class. Nature abhors a vacuum, and with traditional vendors struggling to adapt legacy architectures to the realities of new technology, new storage systems are emerging that provide the cost, efficiency and ease of management customers need to compete effectively with The Cloud. The promise is that they can consolidate and predictably handle the mixed workloads generated by a given set of applications, consistent with the requirements of each individual application. Broadly, these systems are known as hybrid storage systems because of the way they meld hard disk drives and SSDs into a single dynamic system.

These new systems promise to track application context, allocate storage tiers, prioritize requests and route I/O, while striking a delicate balance between available resources and the just-in-time needs of each application. All storage and processing resources (HDDs, SSDs, NV/DRAM, CPUs, networking, etc.) are pooled and made available for allocation on demand, so every application has access to all of the storage system’s resources. More importantly, by deploying dynamic disk pools that break through the traditional complexities and inefficiencies of RAID group management, they deliver a low cost of capacity, and with SSDs used as an acceleration tier, they deliver high performance at a lower cost as well. In other words, these innovative systems provide IT departments with a cost-effective internal storage cloud.
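To make the tiering idea concrete, here is a minimal, hypothetical sketch of how a hybrid pool might promote frequently accessed blocks from the HDD capacity tier into an SSD acceleration tier. The names (HybridPool, PROMOTE_THRESHOLD) and thresholds are invented for illustration; this does not describe any vendor's actual implementation:

```python
# Illustrative sketch only -- a pooled hybrid system with an HDD capacity
# tier plus a small SSD acceleration tier; hot blocks are promoted on demand.
from collections import defaultdict

SSD_TIER_BLOCKS = 4      # hypothetical SSD tier size, in blocks
PROMOTE_THRESHOLD = 3    # hypothetical access count that marks a block "hot"

class HybridPool:
    def __init__(self):
        self.ssd = set()                       # block IDs currently on the SSD tier
        self.access_counts = defaultdict(int)  # per-block access counters

    def read(self, block_id):
        self.access_counts[block_id] += 1
        tier = "SSD" if block_id in self.ssd else "HDD"
        # Promote hot HDD-resident blocks into the SSD acceleration tier.
        if tier == "HDD" and self.access_counts[block_id] >= PROMOTE_THRESHOLD:
            self._promote(block_id)
        return tier

    def _promote(self, block_id):
        # Evict the coldest SSD-resident block if the tier is full.
        if len(self.ssd) >= SSD_TIER_BLOCKS:
            coldest = min(self.ssd, key=lambda b: self.access_counts[b])
            self.ssd.remove(coldest)
        self.ssd.add(block_id)

pool = HybridPool()
for block in [1, 2, 1, 1, 3, 1]:   # block 1 becomes hot and is promoted
    print(block, pool.read(block))
```

The point of the sketch is simply that placement decisions are made dynamically against a shared pool, rather than being fixed at RAID-group creation time.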