
What next for storage?

December 11th, 2012

Storage vendors have been very inventive over the years, creating some genuinely interesting technologies: unified controllers, wide striping, various snapshot techniques, capacity efficiencies, and so on. But where have the key innovations been recently? Is it enough to have fancy features and technology? Is this what customers are even asking for?

The primary use case I still see from vendors is virtualisation, but you could argue that databases remain a key business requirement for most organisations, and, whatever we're told, databases are still physical. So is there any major benefit in having fancy technology at the storage back end with virtualisation in front?

Scale-Out Storage

First, let's look at the database, as this is a relatively easy one to address. You could argue that tiering is a great technology for databases: most data is historic and not read actively, and most intensive applications access a small data set. But this assumes the application is written correctly and isn't doing full table scans; it also assumes that you regularly access this relatively small working set and that it's predictable. Another key element is the high write throughput of logs and temporary databases. Tiering may fix a few use cases for database data, but its unpredictability makes me uncomfortable, and it does nothing for my write requirements, which need consistently good write performance.
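To make that unpredictability concrete, here's a rough sketch (a toy model with invented numbers, not any vendor's actual tiering algorithm) of what happens when promotion is driven by yesterday's access counts and the working set then shifts:

```python
# Toy model of reactive tiering: blocks are promoted to flash based on
# *yesterday's* access counts, so a shifted working set pays the disk
# penalty until the tiering engine catches up. All numbers are invented.

FLASH_SLOTS = 100          # how many blocks fit on the flash tier

def promote(access_counts, slots=FLASH_SLOTS):
    """Pick the most frequently accessed blocks for the flash tier."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:slots])

# Day 1: the application hammers blocks 0-99 (say, this month's partition).
day1_counts = {block: 50 for block in range(100)}
flash = promote(day1_counts)

# Day 2: a quarter-end report shifts the working set to blocks 200-299.
day2_reads = list(range(200, 300)) * 50
hits = sum(1 for block in day2_reads if block in flash)
print(f"Day-2 flash hit rate: {hits / len(day2_reads):.0%}")  # prints 0%
```

Every day-2 read misses flash and lands on slow disk, right when the report needs the performance most.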

Really, to make the best use of a database you still need to control the data and the disk configuration manually: high-performance flash for write workloads, with the database files split up so that the active working set can always be served from flash as well. This not only gives much more predictable performance, it should in many circumstances be as efficient and as dynamic as any tiering technology, just without the downsides (of which there are a few).
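As an illustration of what that manual control might look like (the file names, roles, and 90-day cut-off are all hypothetical), placement is decided by role and age rather than by an opaque heuristic:

```python
# Hand-placed database layout: pin write-heavy and active files to flash,
# park historic partitions on capacity disk. File names are hypothetical.

def assign_tier(file_role, partition_age_days=None):
    """Deterministic placement: no heuristics, so performance is predictable."""
    if file_role in ("redo_log", "tempdb"):
        return "flash"                       # high, steady write throughput
    if file_role == "partition" and partition_age_days is not None \
            and partition_age_days <= 90:
        return "flash"                       # active working set
    return "capacity_disk"                   # cold, historic data

layout = {
    "orders_2012q4.dbf": assign_tier("partition", partition_age_days=10),
    "orders_2011q4.dbf": assign_tier("partition", partition_age_days=400),
    "redo01.log":        assign_tier("redo_log"),
    "temp01.dbf":        assign_tier("tempdb"),
}
print(layout)
```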

On to virtualisation. VMware have been doing a fantastic job of putting a lot of intelligence directly into the hypervisor and vCenter. Do I need hardware snapshots? I can use Avamar (or VMware Data Protection), Veeam, CommVault, or any one of a hundred other backup solutions that integrate with VMware's API and mean I don't need intelligent storage. Tiering is slowly getting there; I agree storage vendors currently have the lead here, but I imagine this won't last long! Load balancing is already better done at the hypervisor layer, dynamically rebalancing datastores based on capacity or response times. Replication can even be handled fairly well at the hypervisor layer, and some startups have come out with this as their primary replication strategy! It's also not exactly a secret that VMware are looking at significantly changing, and improving, the way they interact with disk. I've no doubt that Microsoft will be close behind VMware in this arena, as Microsoft have always loved the shared-nothing approach and avoiding a SAN if you can help it (not that anyone listens).
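As a rough sketch of the kind of placement decision the hypervisor can make (a toy version with invented thresholds and figures, not VMware's actual Storage DRS logic):

```python
# Toy datastore selection in the spirit of hypervisor-level balancing:
# prefer the datastore with the most free space among those whose observed
# latency is acceptable. Thresholds and numbers are invented.

LATENCY_LIMIT_MS = 15

datastores = [
    {"name": "ds1", "free_gb": 400, "latency_ms": 22},
    {"name": "ds2", "free_gb": 250, "latency_ms": 8},
    {"name": "ds3", "free_gb": 900, "latency_ms": 12},
]

def place_vmdk(stores, needed_gb):
    candidates = [s for s in stores
                  if s["latency_ms"] <= LATENCY_LIMIT_MS
                  and s["free_gb"] >= needed_gb]
    if not candidates:
        raise RuntimeError("no datastore meets capacity/latency targets")
    return max(candidates, key=lambda s: s["free_gb"])["name"]

print(place_vmdk(datastores, needed_gb=100))  # -> "ds3"
```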

Flash Storage

So what do you really need from a storage vendor? I think there are two key areas I'm going to be watching closely over the next few years. The first is a cost-effective flash strategy: one that is flexible and dynamic without compromising performance for capacity (as is often the case with tiering). A flash strategy is key for me as this is clearly where the market is heading; I also see customers demanding more performance and better response times, especially if the industry gets its wish and everyone starts doing data analytics.

The second is scale-out architecture, proper scale-out architecture. By proper scale-out architecture I mean that adding more nodes gives a linear increase in capacity and performance (allowing for any protection overheads), and it needs to suit all workloads, whether that's a scale-out workload (lots of small transactions distributed across multiple hosts) or a scale-up workload (a really big database!). I don't think anyone has truly cracked scale-out arrays yet; there's definitely some interesting technology, but the use cases are fairly specific. I also think that because this area threatens the traditional array, there is some hesitance to promote it. Scale-out should also leverage commodity hardware wherever possible; this is what makes it so appealing, and it makes the virtualisation market very interesting, as suddenly my storage array is a candidate for the hypervisor too! I think what the likes of Nutanix are doing is incredibly revolutionary at the moment, and I know others will follow suit given time.
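To pin down what I mean by linear scaling minus protection overheads, here's a back-of-the-envelope sketch (the per-node figures and replication factor are assumptions, not any product's specification):

```python
# Back-of-the-envelope scale-out maths: capacity and throughput grow
# linearly with node count, minus a fixed protection overhead.
# Per-node figures and the replication factor are assumptions.

NODE_RAW_TB = 24
NODE_IOPS = 40_000
REPLICATION_FACTOR = 2      # every write is mirrored to a second node

def cluster_profile(nodes):
    usable_tb = nodes * NODE_RAW_TB / REPLICATION_FACTOR
    # Mirrored writes consume back-end IOPS on two nodes.
    write_iops = nodes * NODE_IOPS / REPLICATION_FACTOR
    return usable_tb, write_iops

for n in (4, 8, 16):
    tb, iops = cluster_profile(n)
    print(f"{n:2d} nodes: {tb:5.0f} TB usable, {iops:8,.0f} write IOPS")
```

Double the nodes and both numbers double; the protection overhead is a constant factor, not a scaling penalty.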

So is the enterprise storage array dead? No, because I still need a solid and robust foundation for my critical (maybe legacy) applications and databases. Until the major software vendors fully adopt a scale-out architecture at the application layer, I'm going to need some significant grunt to power the back end, and I'm going to need it to be robust and highly available! I think mid-market storage is the area that is going to change significantly: we'll see much more commodity-based technology coming through, and less critical requirements around fancy storage technologies. Space efficiency and/or cost-effective capacity are becoming key factors; you can have all the fancy technology in the world, but if another vendor is 25% cheaper for functionally equivalent capacity, I'll find it difficult to justify the price.
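That price comparison is easy to put into numbers (all figures here are hypothetical): what matters is cost per usable terabyte after efficiency features, not the feature list itself.

```python
# Cost per usable TB after space-efficiency features. All figures are
# hypothetical; the point is that efficiency only justifies a price
# premium up to the break-even ratio.

def cost_per_usable_tb(price, raw_tb, efficiency_ratio):
    return price / (raw_tb * efficiency_ratio)

fancy  = cost_per_usable_tb(price=400_000, raw_tb=100, efficiency_ratio=1.25)
simple = cost_per_usable_tb(price=300_000, raw_tb=100, efficiency_ratio=1.0)

print(f"feature-rich array: ${fancy:,.0f}/TB")   # $3,200/TB
print(f"25% cheaper array:  ${simple:,.0f}/TB")  # $3,000/TB
```

Here a 25% efficiency advantage still loses to a 25% lower price per usable terabyte.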

This post was also inspired somewhat by Enrico Signoretti's similar discussion: http://juku.it/en/articles/netapp-is-a-bore-and-its-not-the-only-one/




