NFS is file-level storage, unlike FC and iSCSI, which are block-level. NFS is a file-sharing protocol that allows several clients to connect to the same storage at the same time; the file shares are known as exports. Although NFS version 4 has been on the market for years, vSphere still supports only NFS version 3. With the introduction of 10GbE links, NFS has become more competitive with FC SANs in terms of performance. NFS is cheap and very easy to set up, which makes it the ultimate choice for SMBs and often a default choice for VDI deployments.
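As a quick illustration, an NFS v3 export can be mounted as a datastore from the ESXi command line (ESXi 5.x and later). The host name, export path, and datastore label below are example values, not from any particular environment:

```shell
# Mount an NFS v3 export as a datastore on this ESXi host.
# "nas01.example.com", "/vol/datastore1", and "nfs_ds1" are placeholder values.
esxcli storage nfs add --host=nas01.example.com --share=/vol/datastore1 --volume-name=nfs_ds1

# List the NFS datastores to verify the mount
esxcli storage nfs list
```

The same mount can also be created through the vSphere Client's Add Storage wizard.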
Unlike FC and iSCSI, NFS relies entirely on the networking stack and cannot take advantage of MPIO plugins such as SATPs and PSPs. Therefore, IP-based redundancy and routing should be used instead.
Two different methods can be used to provide that redundancy:
- Create two or more vSwitches, each with its own VMkernel interface. The uplink on each vSwitch connects to a separate, redundant physical switch, and the VMkernel interfaces and NFS server interfaces are split across different subnets.
- If your physical switches support EtherChannel and your ESX/ESXi hosts need to access more than one storage controller interface from different pNICs, assign multiple IP addresses on the storage controller and configure Link Aggregation Control Protocol (LACP) load balancing on the storage array controller. The vSwitch load-balancing policy should be set to Route Based on IP Hash.
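With the EtherChannel approach, the IP-hash policy can be applied to a standard vSwitch from the ESXi command line. This is a sketch; the vSwitch name is an example, and it assumes the matching link-aggregation configuration is already in place on the physical switch:

```shell
# Set the load-balancing policy on a standard vSwitch to Route Based on IP Hash.
# "vSwitch1" is an example name.
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

# Confirm the active failover and load-balancing policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```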
In addition to redundancy, the following best practices apply:
- Mount NFS exports using IP addresses rather than host names
- Use separate network devices to isolate the storage traffic
- Place NFS traffic on a non-routable VLAN
- Optimize Ethernet flow control
- Use jumbo frames (set the MTU to 9000)
- Enable RSTP or PortFast on the switch ports connected to the ESX/ESXi hosts
- Use switches with sufficient port buffers
- Set the SCSI timeout in the Windows guest OS to 60 seconds
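For the jumbo-frames recommendation, the MTU must be raised end to end: on the physical switch ports, on the vSwitch, and on the VMkernel interface used for NFS. A minimal sketch for a standard vSwitch, assuming example names `vSwitch1` and `vmk1`:

```shell
# Raise the MTU on the vSwitch carrying NFS traffic ("vSwitch1" is an example name)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the NFS VMkernel interface ("vmk1" is an example name)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

If any hop in the path is left at the default 1500-byte MTU, large frames are dropped or fragmented, so verify the physical switch configuration as well.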
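The Windows guest SCSI timeout is controlled by a registry value; VMware Tools normally sets it, but it can also be set manually from an elevated command prompt inside the guest (a reboot is required for the change to take effect):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
```

A 60-second timeout gives the storage path time to fail over before the guest OS marks its disks as failed.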