Objective 3.3 – Create a vSphere 5 Physical Storage Design from an Existing Logical Design

This objective covers storage, one of the most important components of any infrastructure. ESXi provides host-level storage virtualization, which logically abstracts the physical storage layer from virtual machines.

An ESXi virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up as easily as any other file. You can configure virtual machines with multiple virtual disks.

Storage design is driven by four common factors:

Availability: Every component and connection to your storage device must be redundant to eliminate single points of failure (SPOFs). Examples include multiple PSUs in the host, multiple HBAs (in the case of FC SAN), multiple NICs (in the case of IP-based storage), multiple storage processors (SPs), and so on.

Capacity: This should be ascertained as part of the pre-virtualization exercise, and also monitored on an ongoing basis to understand how much capacity is left and how many more VMs can be accommodated.

Performance: This can be measured with several metrics such as IOPS and latency. Performance becomes a key metric when hosting Tier 1 applications on the virtual platform.
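To make the performance factor concrete, here is a minimal sketch of how you might total up the IOPS demand of a planned VM population before choosing storage. The VM profiles and per-VM IOPS figures are hypothetical illustrations, not measured values:

```python
# Rough IOPS-demand sizing sketch. Each profile is (vm_count, peak_iops_per_vm);
# the numbers below are illustrative assumptions, not vendor or measured figures.
def required_iops(vm_profiles):
    """Sum the peak IOPS demand across all VM profiles."""
    return sum(count * iops for count, iops in vm_profiles)

profiles = [
    (20, 50),   # 20 light VMs at ~50 IOPS each (assumed)
    (5, 400),   # 5 Tier 1 VMs at ~400 IOPS each (assumed)
]
print(required_iops(profiles))  # 3000
```

The resulting figure is what the storage back end must sustain at acceptable latency; it is the starting point for disk-count and RAID-level decisions.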

Cost: This is a major driving factor for any virtualization project. FC SAN is the most expensive option (and provides optimal performance), but with 10 Gb links, IP-based storage (cheaper, since it can reuse the existing network infrastructure) is fast catching up as a mainstream storage choice.

Which RAID option to choose? This really depends on the requirements. One of my very good friends has written a blog post explaining each RAID type in detail; here is a brief summary:

RAID 0: stripes across all the disks with no parity or mirroring. Optimal for performance, but a single disk failure destroys all the data.

RAID 1: mirroring without striping; a minimum of two disks is required. Data is mirrored across drives, which is optimal for availability, but usable capacity is half of the total disk capacity.

RAID 5: the most commonly used RAID type; block striping with distributed parity. Besides good performance, RAID 5 maximizes capacity, since only one disk's worth of space is consumed by parity, and it withstands a single disk failure.

RAID 6: very similar to RAID 5, but it uses the equivalent of two disks for parity and withstands two disk failures.
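The capacity trade-offs above can be sketched as a small calculation. The disk count and disk size below are example inputs; the formulas follow directly from the RAID descriptions:

```python
# Usable capacity per RAID level, as summarized above.
# disks = number of drives in the set, size_gb = capacity of each drive.
def usable_capacity_gb(raid, disks, size_gb):
    if raid == 0:
        return disks * size_gb          # striping only: full raw capacity, no redundancy
    if raid == 1:
        return disks * size_gb // 2     # mirrored: half the raw capacity is usable
    if raid == 5:
        return (disks - 1) * size_gb    # one disk's worth consumed by parity
    if raid == 6:
        return (disks - 2) * size_gb    # two disks' worth consumed by parity
    raise ValueError(f"unsupported RAID level: {raid}")

# Example: eight 600 GB drives under each RAID level.
for level in (0, 1, 5, 6):
    print(f"RAID {level}: {usable_capacity_gb(level, 8, 600)} GB usable")
```

Running this for eight 600 GB drives shows the spread clearly: 4800 GB for RAID 0, 2400 GB for RAID 1, 4200 GB for RAID 5, and 3600 GB for RAID 6.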

How large a datastore should be can be decided by either the predictive scheme or the adaptive scheme:

The predictive scheme's approach is to provision several small LUNs with different storage characteristics. It provides better control of the space, disk shares can be apportioned more appropriately, there is less contention on individual LUNs, and so on.

The adaptive scheme's approach is to provision one large LUN with write caching enabled. Large LUNs are appropriate when your VMDKs are going to be large, and they avoid the management overhead of many smaller LUNs.
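For the predictive scheme, a rough LUN-sizing calculation helps decide how small "small" should be. This is a sketch only: the per-VM overhead percentages for snapshots and the free-space buffer are assumptions you would replace with your own standards, and the vswap term assumes no memory reservation:

```python
# Predictive-scheme LUN sizing sketch. The 20% snapshot allowance and 10%
# free-space buffer are illustrative assumptions, not VMware recommendations.
def lun_size_gb(vms_per_lun, avg_vmdk_gb, avg_mem_gb,
                snapshot_pct=0.20, free_pct=0.10):
    # Each VM needs its VMDK, room for snapshot growth, and a vswap file
    # (equal to configured memory when there is no reservation).
    per_vm = avg_vmdk_gb * (1 + snapshot_pct) + avg_mem_gb
    total = vms_per_lun * per_vm
    return total * (1 + free_pct)   # keep headroom free on the datastore

# Example: 10 VMs, 40 GB average VMDK, 4 GB memory each.
print(lun_size_gb(10, 40, 4))  # 572.0
```

Inverting the same arithmetic (fixing the LUN size and solving for the VM count) gives the adaptive-scheme view of how many VMs one large LUN can host.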

With vSphere 5.0, VMFS-5 datastores use a 1 MB block size by default; it is also important to create datastores using the vSphere Client so that partitions are aligned correctly.

Which protocol to use? I have discussed each protocol in detail: FC SAN, iSCSI, and NFS.

I have written a blog post about Hardware Acceleration here

Read Frank Denneman's article on HBA queue depth, and also Duncan Epping's article on the DSNRO setting.
