Recommendations for iSCSI Protocol
Best Practices for Oracle ZFS Storage Appliance and VMware vSphere 5.x: Part 5
This article describes how to configure the iSCSI protocol for VMware vSphere 5.x with Oracle ZFS Storage Appliance.
by Anderson Souza
Published July 2013
This article is Part 5 of a seven-part series that provides best practices and recommendations for configuring VMware vSphere 5.x with Oracle ZFS Storage Appliance to reach optimal I/O performance and throughput. The best practices and recommendations highlight configuration and tuning options for Fibre Channel, NFS, and iSCSI protocols.
The series also includes recommendations for the correct design of network infrastructure for VMware cluster and multi-pool configurations, as well as the recommended data layout for virtual machines. In addition, the series demonstrates the use of VMware linked clone technology with Oracle ZFS Storage Appliance.
All the articles in this series can be found here:
Note: For a white paper on this topic, see the Sun NAS Storage Documentation page.
The Oracle ZFS Storage Appliance product line combines industry-leading Oracle integration, management simplicity, and performance with an innovative storage architecture and unparalleled ease of deployment and use. For more information, see the Oracle ZFS Storage Appliance Website and the resources listed in the "See Also" section at the end of this article.
Note: References to Sun ZFS Storage Appliance, Sun ZFS Storage 7000, and ZFS Storage Appliance all refer to the same family of Oracle ZFS Storage Appliances.
Best Practices and Recommendations
The following best practices and recommendations apply for VMware vSphere 5.x using the iSCSI protocol with Oracle ZFS Storage Appliance.
- On VMware ESXi 5.x hosts, ensure that you have at least one dual-port 10GbE NIC working with an MTU of 9000 (jumbo frames).
- Use at least two physical IP network switches.
- On the Oracle ZFS Storage Appliance side, ensure that you have at minimum a link aggregation of two or more 10GbE NICs attached to a physical IP network switch, configured and working with a port-channel group or with IPMP technologies.
- Ensure that your 10GbE IP network is properly configured and working with high availability and load balancing (no single point of failure).
- Ensure that your physical IP switches or routers are not congested or saturated.
- Ensure that your iSCSI network provides adequate throughput as well as low latency between initiators and targets.
- Isolate the iSCSI traffic through different VLANs or network segmentation, and use a dedicated VMware vSwitch for iSCSI traffic.
- To achieve the best performance, as well as to load-balance I/O traffic between paths and provide failover, configure VMware iSCSI to work in port binding mode.
- Change the storage array type to `VMW_SATP_DEFAULT_AA`, change the path selection policy from `VMW_PSP_MRU` to `VMW_PSP_RR`, and ensure that all physical NIC members of the port binding are balancing the I/O traffic.
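A quick way to validate the jumbo frame recommendation end to end is a `vmkping` with a payload just under the 9000-byte MTU and the don't-fragment bit set. The target address below is only an example; substitute an iSCSI interface address of your Oracle ZFS Storage Appliance.

```shell
# -d: set the don't-fragment bit; -s: ICMP payload size.
# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header).
# Replace 192.168.36.100 with an iSCSI interface of your appliance.
vmkping -d -s 8972 192.168.36.100
```

If the ping fails while a default-size `vmkping` succeeds, some device in the path is not passing jumbo frames.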
Figure 1 shows the high-level architecture, suitable for a production environment, of two different iSCSI topologies working with Link Aggregation Control Protocol (LACP), port-channel, and IPMP configuration with VMware vSphere 5.x and Oracle ZFS Storage Appliance.
Figure 1. Oracle ZFS Storage Appliance and VMware vSphere 5.x iSCSI environments

Configuring iSCSI for vSphere 5.x
Configuring iSCSI in Port Binding Mode
The following steps show how to configure VMware vSphere 5 iSCSI in port binding mode with Oracle ZFS Storage Appliance.
- Create a new vSwitch with at least two VMkernel ports and two 10GbE interfaces, each working with an MTU of 9000 (jumbo frames) and the VMware port binding configuration. The example in Figure 2 shows the `iSCSI01` and `iSCSI02` VMkernel ports, while the 10GbE interfaces are `vmnic2` and `vmnic3`.
Figure 2. VMware vSwitch configuration screen shown in VMware vSphere 5.x client

- For each VMkernel port, enable the Override switch failover mode option, as seen in Figure 3. Ensure that only one 10GbE adapter is enabled per port group. Additional cards must be moved to Unused Adapters.
To perform this task, select the ESXi5.x host, and then select the Configuration tab, Networking, and then Properties for your iSCSI vSwitch. Select the iSCSI port group, click Edit, and then select the NIC Teaming tab. See Figure 3.
Figure 3. VMware iSCSI vSwitch NIC teaming and configuration screen shown in VMware vSphere 5.x client
The example in Figure 3 shows two 10GbE adapters and two different VMkernel ports. Both 10GbE adapters (`vmnic2` and `vmnic3`) are being balanced across two different port groups, with the following configuration:

- The `iSCSI01` port group has the `vmnic2` adapter enabled and the `vmnic3` adapter unused.
- The `iSCSI02` port group has the `vmnic3` adapter enabled and the `vmnic2` adapter unused.

Important: When working with a port binding configuration, each port group must have only one active adapter. All other adapters must be moved to Unused Adapters. Do not use standby mode. Refer to Figure 3.
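The vSwitch, jumbo frame, and active/unused uplink settings described above can also be applied from the ESXi 5.x shell. The sketch below assumes the vSwitch is named vSwitch1 and that the port groups and uplinks match Figure 3; adjust the names for your environment.

```shell
# Create the iSCSI vSwitch and raise its MTU to 9000 (jumbo frames).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Attach both 10GbE uplinks to the vSwitch.
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One active adapter per port group; an uplink left off the active and
# standby lists is treated as an Unused Adapter, as port binding requires.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI01 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI02 --active-uplinks=vmnic3
```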
Once the port group configuration is ready, add the VMware software iSCSI adapter using the following steps.
- Open a connection with your VMware vCenter server, select the ESXi5.x host, and select Configuration.
- Under the Hardware option, select Storage Adapters and then click Add.
- Select Add Software iSCSI Adapter. Click OK.
A new iSCSI vHBA will be created, as shown in Figure 4.
Figure 4. iSCSI Software Adapter screen shown in VMware vSphere 5.x client

- Under Hardware and Storage Adapters, select your new iSCSI vHBA, and then select Properties.
The iSCSI Initiator Properties screen will open.
- Select Configure and enter an iSCSI alias name for this vHBA. Click OK.
The example in Figure 5 shows the iSCSI alias name `ESXi5.x`. Choose an alias that best fits your environment. Also, take note of the IQN name of your ESXi 5.x host, which in the example is `iqn.1998-01.com.vmware:aie-4440d-5312c143`. This information will be required for registering a new iSCSI initiator on the Oracle ZFS Storage Appliance, as shown in Figure 5.
Figure 5. iSCSI Initiator Properties screen shown in VMware vSphere 5.x client

- On the same screen, to bind the port groups to the software iSCSI adapter and activate vmknic-based multipathing for software iSCSI, select the Network Configuration tab, click Add, and then select the `iSCSI01` and `iSCSI02` port groups. Click OK.
Figure 6 shows the port binding details.
Figure 6. iSCSI Initiator Properties screen showing port binding details
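For scripted deployments, the software adapter creation and port binding steps above have esxcli equivalents. This is a sketch: the adapter name vmhba39 matches the later listings, but the vmk1/vmk2 VMkernel interface names are assumptions and must be replaced with the interfaces backing your iSCSI01 and iSCSI02 port groups.

```shell
# Enable the software iSCSI initiator (this creates the iSCSI vHBA).
esxcli iscsi software set --enabled=true

# Bind the two VMkernel interfaces to the software adapter (port binding).
esxcli iscsi networkportal add --adapter=vmhba39 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba39 --nic=vmk2

# Confirm that both vmknics are bound, matching Figure 6.
esxcli iscsi networkportal list --adapter=vmhba39
```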
- Create a new iSCSI target on the Oracle ZFS Storage Appliance. To perform this, log in to the Oracle ZFS Storage Appliance BUI, click Configuration, SAN, and then the iSCSI Targets option. Select the Target IQN Auto-assign option; enter an alias name that best fits your environment, select a network interface, and click OK.
The iSCSI target is created. The example in Figure 7 shows the interface `aggr1`, which is a link aggregation of two 10GbE interfaces.

Figure 7. iSCSI target configuration shown in Oracle ZFS Storage Appliance BUI
- Select your new iSCSI target and move it to iSCSI Target Groups. Select Edit, change the name, and then click OK and Apply. Figure 8 shows the edit window for the iSCSI target.
Note: As previously mentioned, a best practice is to work with at least two 10GbE NICs in LACP mode per Oracle ZFS Storage Appliance controller. CHAP authentication is not being used in this example, so you do not need to enter CHAP information.
Figure 8. iSCSI target groups configuration shown in Oracle ZFS Storage Appliance BUI
- On the same screen, click Initiators, and then iSCSI Initiators to create a new iSCSI initiator. Enter the IQN initiator shown in Figure 5. In the example shown in Figure 9, the IQN initiator is `iqn.1998-01.com.vmware:aie-4440d-5312c143`. Enter an alias name and click OK.
Figure 9. iSCSI initiators configuration shown in the Oracle ZFS Storage Appliance BUI
- Now that the new iSCSI initiator has been created, select it and move it to iSCSI Initiator Groups. Select Edit and change the iSCSI initiator name. Click OK and Apply. Figure 10 shows the iSCSI initiator group edit window.
Figure 10. iSCSI initiator groups configuration shown in the Oracle ZFS Storage Appliance BUI
- Next, you will create a LUN to map to the target and initiator group you have just created. Click Shares, select your project, and create the LUN. Figure 11 shows the Create LUN dialog box where you can map the LUN to the target and initiator group.
Figure 11. iSCSI LUN provisioning shown in the Oracle ZFS Storage Appliance BUI
- On the VMware ESXi5.x host on which you have created the iSCSI configuration, open iSCSI Initiator Properties, select the Dynamic Discovery tab, and click Add.
- In the Add Send Target Server screen shown in Figure 12, add the iSCSI IP address of the 10GbE link aggregation interface for the Oracle ZFS Storage Appliance. Click OK and Close.
Figure 12. Adding an iSCSI server in the VMware vSphere 5.x client

- Rescan the adapters to discover the new iSCSI LUN that you have just created.
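The same discovery and rescan can be performed from the ESXi shell. The portal address below is a placeholder for the link aggregation IP of your Oracle ZFS Storage Appliance; vmhba39 is the software adapter name from the later listings.

```shell
# Register the appliance as a send-target (dynamic discovery) portal.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba39 \
    --address=192.168.36.100:3260

# Rescan the software iSCSI adapter to pick up the new LUN.
esxcli storage core adapter rescan --adapter=vmhba39
```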
After a rescan of the iSCSI HBA, the new LUN will be available to the ESXi 5.x host and connected through two active paths, the members of the port binding configuration, as shown in Figure 13.

Figure 13. Overview of VMware vSphere 5.x iSCSI network configuration shown in VMware vSphere 5.x client

- Ensure that the new iSCSI LUN is visible and accessible to the ESXi 5.x host. Also, validate that the multipath configuration is working properly by using the following commands.
- Open an SSH connection to the ESXi 5.x host and run the `esxcfg-mpath -l` command, as shown in Listing 1, to list all LUNs attached to the host. Identify the new iSCSI LUN(s).
```
# esxcfg-mpath -l
iqn.1998-01.com.vmware:aie-4440d-5312c143-00023d000002,iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17,t,2-naa.600144f0a9b12ec6000050b93e310002
   Runtime Name: vmhba39:C1:T0:L0
   Device: naa.600144f0a9b12ec6000050b93e310002
   Device Display Name: SUN iSCSI Disk (naa.600144f0a9b12ec6000050b93e310002)
   Adapter: vmhba39 Channel: 1 Target: 0 LUN: 0
   Adapter Identifier: iqn.1998-01.com.vmware:aie-4440d-5312c143
   Target Identifier: 00023d000002,iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17,t,2
   Plugin: NMP
   State: active
   Transport: iscsi
   Adapter Transport Details: iqn.1998-01.com.vmware:aie-4440d-5312c143
   Target Transport Details: IQN=iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17 Alias= Session=00023d000002 PortalTag=2

iqn.1998-01.com.vmware:aie-4440d-5312c143-00023d000001,iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17,t,2-naa.600144f0a9b12ec6000050b93e310002
   Runtime Name: vmhba39:C0:T0:L0
   Device: naa.600144f0a9b12ec6000050b93e310002
   Device Display Name: SUN iSCSI Disk (naa.600144f0a9b12ec6000050b93e310002)
   Adapter: vmhba39 Channel: 0 Target: 0 LUN: 0
   Adapter Identifier: iqn.1998-01.com.vmware:aie-4440d-5312c143
   Target Identifier: 00023d000001,iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17,t,2
   Plugin: NMP
   State: active
   Transport: iscsi
   Adapter Transport Details: iqn.1998-01.com.vmware:aie-4440d-5312c143
   Target Transport Details: IQN=iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17 Alias= Session=00023d000001 PortalTag=2
```
Listing 1
Note: The command output can be filtered to show iSCSI devices only.

```
# esxcfg-mpath -l | grep -i iSCSI
   Device Display Name: SUN iSCSI Disk (naa.600144f0a9b12ec6000050b93e310002)
   Transport: iscsi
   Device Display Name: SUN iSCSI Disk (naa.600144f0a9b12ec6000050b93e310002)
   Transport: iscsi
```
- Once you have identified the right iSCSI LUN, run the following command to validate the multipath configuration.
```
# esxcfg-mpath -bd naa.600144f0a9b12ec6000050b93e310002
naa.600144f0a9b12ec6000050b93e310002 : SUN iSCSI Disk (naa.600144f0a9b12ec6000050b93e310002)
   vmhba39:C0:T0:L0 LUN:0 state:active iscsi Adapter: iqn.1998-01.com.vmware:aie-4440d-5312c143 Target: IQN=iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17 Alias= Session=00023d000001 PortalTag=2
   vmhba39:C1:T0:L0 LUN:0 state:active iscsi Adapter: iqn.1998-01.com.vmware:aie-4440d-5312c143 Target: IQN=iqn.1986-03.com.sun:02:a458fee1-24a7-c28a-949a-9be995f3ea17 Alias= Session=00023d000002 PortalTag=2
```
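A healthy port binding configuration should report one active path per bound VMkernel port, two in this example. A small shell filter over the `esxcfg-mpath` output makes that check scriptable; the sample below replays the two path lines from the listing above rather than querying a live host.

```shell
# Sample path lines as printed by `esxcfg-mpath -bd <naa>` (captured above).
sample='vmhba39:C0:T0:L0 LUN:0 state:active iscsi
vmhba39:C1:T0:L0 LUN:0 state:active iscsi'

# Count paths in the active state. On a live ESXi host, replace the
# printf with: esxcfg-mpath -bd naa.600144f0a9b12ec6000050b93e310002
active_paths=$(printf '%s\n' "$sample" | grep -c 'state:active')
echo "$active_paths"   # 2 for a two-path port binding
```

If the count drops to 1, one of the port binding members has lost connectivity and the failover configuration should be revisited.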
- Similar to the instructions provided for the Fibre Channel protocol, and as part of the tuning options for the iSCSI protocol, change the default storage array type, the path selection policy, and the round-robin I/O operation limit before putting the servers into production. Use the ESXi commands shown in the following substeps, and first identify all Oracle ZFS Storage Appliance iSCSI disks that will be utilized by your virtualized server.
- Identify the Oracle ZFS Storage Appliance iSCSI disks:
```
esxcli storage nmp device list | egrep -i "SUN iSCSI Disk"
   Device Display Name: SUN iSCSI Disk (naa.600144f0fe9845750000513f7c570001)
   Device Display Name: SUN iSCSI Disk (naa.600144f0fe9845750000513f9b580002)
```
- As shown in Listing 2, use `for`, `egrep`, and `awk` instructions as filters, to get information for the devices for which the path selection policy and round-robin I/O operation limit will be changed:
```
esxcli storage nmp device list | egrep -i "SUN iSCSI Disk" | awk '{ print $7 }' | cut -c 2-37
naa.600144f0fe9845750000513f7c570001
naa.600144f0fe9845750000513f9b580002

for a in `esxcli storage nmp device list | egrep -i "SUN iSCSI Disk" | awk '{ print $7 }' | cut -c 2-37`
do
   esxcli storage nmp psp roundrobin deviceconfig get -d $a
done
```
Listing 2
- Change the path selection policy of iSCSI disks only.
IMPORTANT: The recommended VMware Storage Array Type Plug-in (SATP) policy for the iSCSI protocol with Oracle ZFS Storage Appliance is VMW_SATP_DEFAULT_AA. The following esxcli command presents the correct options for changing the path selection policy and the storage array type plug-in for the iSCSI LUNs attached to an ESXi host. The command changes the SATP policy of all iSCSI LUNs attached to the host, and the change takes effect only after a reboot of the ESXi host.
```
esxcli storage nmp satp rule add --transport=iscsi --satp=VMW_SATP_DEFAULT_AA --psp=VMW_PSP_RR
```
- Change the I/O operation limit and limit type of policy of iSCSI disks only:
```
for a in `esxcli storage nmp device list | egrep -i "SUN iSCSI Disk" | awk '{ print $7 }' | cut -c 2-37`
do
   esxcli storage nmp psp roundrobin deviceconfig set -d $a -I 1 -t iops
done
```
- Run the commands shown in Listing 3 to ensure that the new values for operation limit and the round-robin path switching have been updated:
```
for a in `esxcli storage nmp device list | egrep -i "SUN iSCSI Disk" | awk '{ print $7 }' | cut -c 2-37`
do
   esxcli storage nmp psp roundrobin deviceconfig get -d $a
done

Device: naa.600144f0fe9845750000513f9b580002
   IOOperation Limit: 1
   Limit Type: Iops
   Use Active Unoptimized Paths: false
```
Listing 3
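After the host reboot required by the SATP rule change, the per-device settings can be spot-checked. The device ID below is the one from Listing 3; replace it with one of your own iSCSI LUNs.

```shell
# Show the SATP and PSP now applied to one of the iSCSI LUNs; expect
# VMW_SATP_DEFAULT_AA and VMW_PSP_RR after the rule takes effect.
esxcli storage nmp device list --device=naa.600144f0fe9845750000513f9b580002 \
    | egrep "Storage Array Type:|Path Selection Policy:"
```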
- Alter the iSCSI software parameters listed in the following table.
Table 1. iSCSI Software Parameters
| iSCSI Advanced Settings Option | Value |
|---|---|
| MaxOutstandingR2T | 8 |
| FirstBurstLength | 16777215 |
| MaxBurstLength | 16777215 |
| MaxRecvDataSegLen | 16777215 |
To perform this task, right-click your iSCSI interface, and then click Properties and Advanced, as shown in Figure 14:
Figure 14. Changing iSCSI parameters in Advanced Settings
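The Table 1 values can also be applied from the ESXi shell with `esxcli iscsi adapter param set`. This is a sketch; vmhba39 is the software adapter name used in the earlier listings and should be replaced with yours.

```shell
# Apply the recommended iSCSI advanced settings to the software adapter.
esxcli iscsi adapter param set --adapter=vmhba39 --key=MaxOutstandingR2T --value=8
esxcli iscsi adapter param set --adapter=vmhba39 --key=FirstBurstLength  --value=16777215
esxcli iscsi adapter param set --adapter=vmhba39 --key=MaxBurstLength    --value=16777215
esxcli iscsi adapter param set --adapter=vmhba39 --key=MaxRecvDataSegLen --value=16777215

# Verify the new values.
esxcli iscsi adapter param get --adapter=vmhba39
```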
See Also
Refer to the following websites for further information on testing results for Oracle ZFS Storage Appliance:
- Oracle ZFS Storage Appliance website: http://www.oracle.com/storage/nas/
- Sun ZFS Storage Appliance Administration Guide: http://download.oracle.com/docs/cd/E22471_01/index.html
- Sun ZFS Storage 7000 Analytics Guide: http://docs.oracle.com/cd/E26765_01/pdf/E26398.pdf
- Sun ZFS Storage 7x20 Appliance Installation Guide: http://docs.oracle.com/cd/E26765_01/pdf/E26396.pdf
- Sun ZFS Storage 7x20 Appliance Customer Service Manual: http://docs.oracle.com/cd/E26765_01/pdf/E26399.pdf
- VMware website: http://www.vmware.com
- VMware vSphere 5.1 documentation: http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html
- VMware Knowledge Base: "Multipathing policies in ESX/ESXi 4.x and ESXi 5.x": http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011340
- VMware Knowledge Base: "Changing the queue depth for QLogic and Emulex HBAs": http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267
- Cisco: http://www.cisco.com
About the Author
Anderson Souza is a virtualization senior software engineer in Oracle's Application Integration Engineering group. He joined Oracle in 2012, bringing more than 14 years of technology industry, systems engineering, and virtualization expertise. Anderson has a Bachelor of Science in Computer Networking, a master's degree in Telecommunication Systems/Network Engineering, and also an MBA with a concentration in project management.