FC SAN

Fibre Channel SAN: A storage area network (SAN) is a specialized high-speed network that connects computer systems, or host servers, to high-performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk arrays. The FC protocol encapsulates SCSI commands into FC frames over a lossless transport, and it is the default solution for most enterprise SANs.
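
As a quick check from the ESXi side, the FC HBAs that a host sees can be listed with esxcli; this is just an illustrative command, run from the ESXi Shell or over SSH:

  # List the storage adapters on the host (FC HBAs appear as vmhbaN with their driver and link state)
  esxcli storage core adapter list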

Storage Array Types:

Active-Active Storage Type: Provides simultaneous access to the LUNs through all storage ports. All paths are active at all times, unless a path fails.

Active-Passive: A system in which one storage processor actively provides access to a given LUN, while the other storage processors act as backups for that LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.

Asymmetric Storage System: Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. ALUA allows the host to determine the states of target ports and prioritize paths. The host uses some of the active paths as primary and others as secondary.
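
From the host's perspective, the ALUA port states show up as path states. A minimal check, assuming a placeholder naa device identifier, is:

  # Show the NMP path groups and states (for example active or standby) for one device
  esxcli storage nmp path list -d naa.60060160a62a2c004cf4b7e27c44e011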

Read Duncan Epping’s article on ALUA.

Zoning: provides access control in the SAN topology. Zoning defines which HBAs can connect to which targets. When you configure a SAN by using zoning, the devices outside a zone are not visible to the devices inside the zone.
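
Zoning is configured on the FC switches themselves rather than on the host. As a rough sketch only, a single-initiator zone on a Brocade FOS switch might be built along these lines and added to an existing zoning configuration; every alias, WWPN, zone, and config name below is made up:

  alicreate "esx01_hba1", "10:00:00:00:c9:aa:bb:cc"
  alicreate "array_spa_p0", "50:06:01:60:dd:ee:ff:00"
  zonecreate "z_esx01_hba1__spa_p0", "esx01_hba1; array_spa_p0"
  cfgadd "prod_cfg", "z_esx01_hba1__spa_p0"
  cfgsave
  cfgenable "prod_cfg"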

Let us look at some advantages:

  • It is a high-speed connection, and had no Ethernet equivalent until 10 GbE arrived
  • Low risk of oversubscription on the paths, as FC is lossless and uses dedicated paths
  • It serves low-latency applications well
  • With dedicated fibre-optic links, it is more secure
  • FC is dedicated to storage traffic
  • FC frames do not carry the TCP/IP overhead that iSCSI and NFS do

Multipathing:

In case of a failure of any element in the SAN, such as an adapter, switch, or cable, ESXi can switch to another physical path that does not use the failed component. This process of path switching to avoid failed components is known as path failover.

To manage storage multipathing, ESXi uses a collection of Storage APIs, also called the Pluggable Storage Architecture (PSA). The PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs).
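
You can list the multipathing plug-ins that the PSA has loaded on a host; on a default installation this typically shows just the NMP:

  # List PSA plug-ins of class MP (multipathing)
  esxcli storage core plugin list --plugin-class=MP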

By default, ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP). The NMP in turn uses two types of sub-plug-ins, Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). Let us look at them in more detail:

SATP: The host identifies the type of array and associates an SATP with it based on the array’s make and model. The array’s identifiers are matched against the SATP claim rules on the host, which dictate whether the array is classified as active-active, active-passive, or ALUA. The NMP uses this information to set the default pathing policy for each LUN on that array.
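
To see which SATPs are available, the default PSP each one maps to, and which SATP has claimed a particular device, something like the following can be used (the naa identifier is a placeholder):

  # List all SATPs and their default path selection policies
  esxcli storage nmp satp list

  # Show the SATP and PSP currently associated with one device
  esxcli storage nmp device list -d naa.60060160a62a2c004cf4b7e27c44e011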

PSP: The NMP provides three native path selection policies. The default policy for a device is selected automatically based on the SATP associated with its array. Let us look at the policies in detail:

Fixed: The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices

Most Recently Used (MRU): The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for most active-passive storage devices

Round Robin: The host uses an automatic path selection algorithm rotating through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs
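
The corresponding plug-in names are VMW_PSP_FIXED, VMW_PSP_MRU, and VMW_PSP_RR. If the automatically selected policy is not the one you want, it can be changed per device; the device identifier below is a placeholder:

  # List the path selection policies available on the host
  esxcli storage nmp psp list

  # Switch one device to Round Robin
  esxcli storage nmp device set -d naa.60060160a62a2c004cf4b7e27c44e011 --psp=VMW_PSP_RR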

Best Practices when using a SAN:

  • Configure your system to have only one VMFS volume per LUN
  • Unless you are using diskless servers, do not set up the diagnostic partition on a SAN LUN
  • For multipathing to work properly, each LUN must present the same LUN ID number to all ESXi hosts
  • Make sure the storage device driver specifies a large enough queue depth. You can set the queue depth for the physical HBA during system setup (see the sketch after this list)
  • On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue parameter to 60. This increase allows Windows to better tolerate delayed I/O resulting from path failover
  • VMware recommends that you provision all LUNs to all ESXi HBAs at the same time. HBA failover works only if all HBAs see the same LUNs
  • When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all ESXi hosts. This provides the greatest flexibility to move virtual machines
  • When you use vMotion or DRS with an active-passive SAN storage device, make sure that all ESXi systems have consistent paths to all storage processors. Not doing so can cause path thrashing when a vMotion migration occurs
  • Do not mix FC HBAs from different vendors in a single host. Having different models of the same HBA is supported, but a single LUN cannot be accessed through two different HBA types, only through the same type
  • Ensure that the firmware level on each HBA is the same. Set the timeout value for detecting a failover. To ensure optimal performance, do not change the default value
  • Make sure read/write caching is enabled.
  • Because of their diverse workloads, the RAID group containing the ESXi LUNs should not include LUNs used by other servers that are not running ESXi
  • Distribute the paths to the LUNs among all the SPs to provide optimal load balancing
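
For the queue-depth item above, the exact module and parameter names depend on the HBA vendor and driver, so treat the following as an illustrative sketch for a QLogic driver only and check the vendor documentation for your HBA:

  # Example only: raise the LUN queue depth for the qlnativefc driver (takes effect after a reboot)
  esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"

  # Verify the parameter value
  esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth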

FCoE

ESXi can use Fibre Channel over Ethernet (FCoE) adapters to access Fibre Channel storage. The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.

FCoE can be delivered through hardware Converged Network Adapters or through software FCoE adapters. Let us look at both in detail:

Hardware FCoE Adapter: Hardware FCoE adapters are completely offloaded, specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionality on the same card. In the vSphere Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba).

Software FCoE: A software FCoE adapter uses the native FCoE protocol stack in ESXi for the protocol processing. The software FCoE adapter is used with a NIC that offers Data Center Bridging (DCB) and I/O offload capabilities.
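
A software FCoE adapter is activated on top of a DCB-capable NIC that is already attached to a vSwitch with a VMkernel adapter. A minimal sequence looks roughly like this; vmnic4 is just an example uplink:

  # List NICs that are eligible for software FCoE
  esxcli fcoe nic list

  # Activate software FCoE on one of them, then verify the resulting vmhba
  esxcli fcoe nic discover -n vmnic4
  esxcli fcoe adapter list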

Configuration Guidelines for Software FCoE:

  • On the ports that communicate with your ESXi host, disable the Spanning Tree Protocol (STP). Having the STP enabled might delay the FCoE Initialization Protocol (FIP) response at the switch and cause an all paths down (APD) condition
  • Turn on Priority-based Flow Control (PFC) and set it to AUTO
  • VMware recommends the following minimum switch firmware versions: Cisco Nexus 5000, version 4.1(3)N2 or later; Brocade FCoE switch, version 6.3.1 or later
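
On the switch side, the STP and PFC guidelines above translate to something like the following on a Nexus 5000 host-facing port; this is only a sketch and the interface name is made up:

  interface Ethernet1/10
    description ESXi FCoE host port
    spanning-tree port type edge trunk
    priority-flow-control mode auto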

Network Adapter Best Practices:

  • Make sure that the latest microcode is installed on the FCoE network adapter
  • If the network adapter has multiple ports, when configuring networking, add each port to a separate vSwitch. This practice helps you to avoid an APD condition when a disruptive event, such as an MTU change, occurs (see the sketch after this list)
  • Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you need to make this change, reboot your host afterwards
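
For the one-vSwitch-per-port recommendation above, a dedicated standard vSwitch for each FCoE-capable port can be created from the CLI along these lines; the vSwitch and vmnic names are examples:

  # Create a dedicated vSwitch and attach one FCoE-capable port as its uplink
  esxcli network vswitch standard add -v vSwitchFCoE1
  esxcli network vswitch standard uplink add -v vSwitchFCoE1 -u vmnic4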

 
