
A Case For Storage Virtualization


By Michael Keeler, Storage Architect, Evolving Solutions

The concept of storage virtualization has been around for a number of years but, until recently, has not gained widespread acceptance. As with most revolutionary ideas, it takes time for companies to become comfortable with placing another device in the heart of the data path between their storage and their servers. The promises of virtualization are many, but so are the risks: an incorrect or inadequate solution would negatively impact the entire data infrastructure. Acceptance of virtualization has therefore followed the same path as Storage Area Networks (SANs), which were discussed for years before the industry finally embraced them. Now it is hard to imagine a medium or large data infrastructure without a SAN, and the same is becoming true of storage virtualization. It is logically the next step in storage management.

The benefits of storage virtualization are many: a single point of management for all storage, increased storage utilization, standardized copy services, easy data migration between storage devices, and a common set of multipath drivers and tools. It is also a key enabler for Information Lifecycle Management (ILM) strategies, assisting with data movement between storage tiers.

Storage virtualization works by adding a management layer between the servers and the storage. From the servers' perspective, the virtualization engines are their storage device, while each storage device sees the engines as its host. Once this layer is in place, it becomes the primary interface for managing both servers and storage. It is easy to group storage devices into tiers or by common usage, even devices from different vendors. Typically, the entire capacity of an array is mapped to the engines in large increments, which are placed into a storage pool. Virtual disks are then allocated out of these pools for assignment to a host server.
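The pooling model described above can be sketched in a few lines of Python. This is purely illustrative: the class and extent names are hypothetical, not taken from any real virtualization product, and real engines map capacity at the block level rather than with string labels.

```python
# Illustrative model: arrays are carved into fixed-size extents (the
# "large increments"), extents are gathered into a tier-level pool, and
# virtual disks are allocated out of the pool for host assignment.

class StorageArray:
    """A backend array whose full capacity is mapped to the engines."""
    def __init__(self, name, capacity_gb, extent_gb=16):
        self.extents = [f"{name}-ext{i}" for i in range(capacity_gb // extent_gb)]

class StoragePool:
    """A pool that aggregates extents, even from different vendors' arrays."""
    def __init__(self, tier):
        self.tier = tier
        self.free = []

    def add_array(self, array):
        self.free.extend(array.extents)

    def allocate_vdisk(self, size_gb, extent_gb=16):
        """Carve a virtual disk out of the pool for a host server."""
        needed = -(-size_gb // extent_gb)  # ceiling division
        if needed > len(self.free):
            raise RuntimeError("pool exhausted")
        vdisk, self.free = self.free[:needed], self.free[needed:]
        return vdisk

# Arrays from two hypothetical vendors share one "gold" tier pool.
pool = StoragePool(tier="gold")
pool.add_array(StorageArray("vendorA", 256))
pool.add_array(StorageArray("vendorB", 128))
vdisk = pool.allocate_vdisk(64)  # 4 x 16 GB extents
print(len(vdisk))                # -> 4
```

The host sees only the virtual disk; which physical array contributed its extents is an internal detail of the pool.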

The presence of this layer shields servers and applications from changes to the storage environment. A storage device can easily be replaced with another unit, with data copied in the background from one unit to the other without application downtime. The ability to move data at will also means that lightly used or outdated data can be migrated to less expensive storage devices.
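A minimal sketch of why this migration is transparent, under the same hypothetical extent-naming scheme as above: the host-facing virtual disk keeps its identity while the engine remaps its extents to a new array. A real engine copies the data block-by-block in the background; here only the remapping is modeled.

```python
# Hypothetical sketch: remap a virtual disk's extents from one backend
# array to another without changing the name the host sees.

def migrate(vdisk_map, old_array, new_array):
    """vdisk_map: virtual-disk name -> list of backend extent IDs."""
    for name, extents in vdisk_map.items():
        vdisk_map[name] = [
            ext.replace(old_array, new_array) if ext.startswith(old_array) else ext
            for ext in extents
        ]

# The host still addresses "vdisk1"; only its backing extents change.
vmap = {"vdisk1": ["arrayA-ext0", "arrayA-ext1"]}
migrate(vmap, "arrayA", "arrayB")
print(vmap["vdisk1"])  # -> ['arrayB-ext0', 'arrayB-ext1']
```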

Since the virtualization engines appear as the storage device to the servers, only the multipath driver associated with the engine manufacturer needs to be installed. This reduces the management complexity and interoperability issues of maintaining numerous multipath drivers.

Copy services are also managed at the virtualization layer, which means that Point-in-Time (PIT) and disaster-recovery data replication need only be purchased once, at the virtualization layer, and share a common management interface. It becomes easier to replicate data or to script PIT copies from one tier to another for backup and data recovery purposes.
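One common way such a PIT copy is implemented is copy-on-write: the snapshot records nothing up front and preserves a block's old contents only when that block is first overwritten. The toy sketch below illustrates the idea; the function names are hypothetical and not from any specific product.

```python
# Toy copy-on-write point-in-time (PIT) copy. A volume is modeled as a
# dict of block -> data; the snapshot lazily saves pre-PIT data.

def take_pit(volume):
    """Record nothing up front; old blocks are saved on first write."""
    return {}

def write(volume, pit, block, data):
    """Preserve the pre-PIT contents of a block before overwriting it."""
    pit.setdefault(block, volume.get(block))  # copy-on-write step
    volume[block] = data

def read_pit(volume, pit, block):
    """Snapshot view: saved block if it was overwritten, else live data."""
    return pit[block] if block in pit else volume.get(block)

vol = {"b0": "old"}
pit = take_pit(vol)
write(vol, pit, "b0", "new")
print(vol["b0"], read_pit(vol, pit, "b0"))  # -> new old
```

Because the snapshot shares unchanged blocks with the source, it is cheap to create, which is what makes frequent PIT copies for backup practical.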

There are two basic approaches to virtualization: a standalone appliance or a blade within a SAN director. Both approaches have merit. The appliance has the advantage of scalability: to grow the virtualization environment, you simply add more appliances, which are managed as a single entity. Blades are potentially more cost-effective, since they already reside within the fabric and need no external cabling to attach to it, and they are protected by highly available, director-class hardware.

Like Storage Area Networks, storage virtualization has taken time to gain widespread acceptance, but its time has arrived. The cost benefits of having a single administrator controlling hundreds of terabytes, or even petabytes, of storage capacity are simply too substantial to ignore.