Data Tiering By Storage Location

Posted by George Crump, December 20, 2011 12:34 PM

As we discussed in our last entry, “Will Solid State Kill Tiering,” the need to move data between different types of storage will increase as memory-based storage becomes more prevalent. Very soon, tiering will mean not only moving data within the storage system itself, but also moving data between different storage systems and even into the server hosting the application.

With mechanical hard drives, the bottleneck created by the storage networking infrastructure was not apparent because of the latency of the drives themselves. Memory-based storage has virtually no latency of its own, so the network bottleneck is exposed. This in large part explains the success of PCIe-based solid state storage devices, which should have faced an uphill battle since they go against the conventional wisdom of shared storage.
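
To make that effect concrete, here is a back-of-the-envelope calculation. The latency figures below are illustrative assumptions, not measurements: roughly 5 ms for a mechanical drive, 0.1 ms for a storage network round trip, and 0.05 ms for PCIe flash. With the mechanical drive, the network is a rounding error; with flash, it becomes the dominant cost.

```python
# Illustrative latency budget: what fraction of a read's total latency
# the storage network contributes, for mechanical vs. flash media.
# All figures are assumptions chosen for illustration only.

NETWORK_RTT_MS = 0.1            # assumed storage network round-trip time

MEDIA_LATENCY_MS = {
    "mechanical HDD": 5.0,      # assumed seek + rotational delay
    "PCIe flash": 0.05,         # assumed flash read latency
}

for media, media_ms in MEDIA_LATENCY_MS.items():
    total = media_ms + NETWORK_RTT_MS
    network_share = NETWORK_RTT_MS / total * 100
    print(f"{media}: {total:.2f} ms total, network = {network_share:.0f}%")

# mechanical HDD: 5.10 ms total, network = 2%
# PCIe flash: 0.15 ms total, network = 67%
```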

Instead, these components have seen wide adoption because of their cost effectiveness, ease of installation, and raw performance. As we discussed in our article “What is Storage Class Memory,” vendors have successfully positioned PCIe-based solid state storage devices as a second tier of memory rather than a faster tier of storage. That positioning works because of their near-zero-latency performance: they are separated from the CPU by nothing more than the PCIe channel.

There is also a storage opportunity for PCIe-based solid state. The problem with PCIe-based solid state as storage is that it creates a separate tier of storage, one that is not only a different medium from the mechanical hard drive but also in a different physical location from the shared storage system that typically houses those drives. Automated tiering and caching systems will be the answer to these problems as they become location aware.
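
As a rough illustration of what “location aware” could mean, the sketch below models tiers that differ not just in speed but in physical placement, with a policy that keeps the hottest data in the server. The tier list, latency figures, and access-frequency thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    location: str      # "server" or "array": where the media physically sits
    latency_ms: float  # assumed access latency, including any network hop

# Hypothetical tiers a location-aware data mover might choose between.
TIERS = [
    Tier("PCIe flash", location="server", latency_ms=0.05),
    Tier("array flash", location="array", latency_ms=0.15),
    Tier("array HDD", location="array", latency_ms=5.0),
]

def place(accesses_per_hour: float) -> Tier:
    """Toy placement policy: the hotter the data, the closer to the CPU.
    Thresholds are illustrative only."""
    if accesses_per_hour > 100:   # extremely hot: keep it in the server,
        return TIERS[0]           # off the storage network entirely
    if accesses_per_hour > 10:    # warm: shared flash in the array
        return TIERS[1]
    return TIERS[2]               # cold: mechanical disk

print(place(500).name)  # PCIe flash
print(place(50).name)   # array flash
print(place(1).name)    # array HDD
```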

Today, we already have separate caching solutions being deployed in servers, leveraging PCIe solid state in parallel with solid state in shared storage. This allows extremely active data to be cached on solid state storage inside the server, off the network. With these configurations, active “read” data is stored inside the server, which means less data needs to travel back and forth across the storage network. Implementing this type of technology could be an alternative to upgrading to the next, faster generation of storage network.
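
The pattern described here is essentially a write-through read cache that happens to live in the server. Below is a minimal sketch of that idea; the ServerSideReadCache and ArrayBackend classes are hypothetical stand-ins, not any vendor's actual product.

```python
from collections import OrderedDict

class ArrayBackend:
    """Hypothetical stand-in for the shared storage system."""
    def __init__(self):
        self.blocks = {}
    def read(self, block_id):
        return self.blocks.get(block_id)
    def write(self, block_id, data):
        self.blocks[block_id] = data

class ServerSideReadCache:
    """Sketch of a server-resident read cache (e.g., on a PCIe flash
    card) in front of a shared array. Writes go straight through to the
    array, so the authoritative copy never lives only in the server;
    that is why data safety stays high in this configuration."""

    def __init__(self, backend, capacity_blocks):
        self.backend = backend
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:          # hit: served locally, no network trip
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend.read(block_id)  # miss: one trip over the storage network
        self.cache[block_id] = data
        if len(self.cache) > self.capacity: # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data

    def write(self, block_id, data):
        self.backend.write(block_id, data)  # write-through: array stays authoritative
        if block_id in self.cache:          # keep any cached copy coherent
            self.cache[block_id] = data

array = ArrayBackend()
cache = ServerSideReadCache(array, capacity_blocks=2)
cache.write(1, b"hot block")
cache.read(1)   # first read: fetched across the network
cache.read(1)   # repeat read: served from the server-side cache
```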

The challenge with these systems is that there is no orchestration between them: the caching and automated tiering software layers are unaware of each other. If the server-based solid state storage is used as a read cache, data safety should be high and performance should certainly improve, but it will not be optimal. In the future, there needs to be coordination that accounts for the location of the two high-speed storage devices so that maximum performance can be achieved, something we will explore in greater detail in our next entry.
