Setting Up a Unix Host to Use iSCSI Storage

    Requirements for setting up a host

    Before you can set up a host to use Unity storage, the following storage system and network requirements must be met.

    Network requirements

    For a host to connect to LUNs on an iSCSI interface, the host must be in the same network environment as the iSCSI interface. To achieve best performance, the host should be on a local subnet with each iSCSI interface that provides storage for it. In a multi-path environment, each physical interface must have two IP addresses assigned, one on each SP, and the interfaces should be on separate subnets.

    Note:  The Linux iSCSI driver, which is part of the Linux operating system and which you configure so that the host iSCSI initiators can access the iSCSI storage, does not distinguish between NICs on the same subnet. As a result, to achieve load balancing, each NIC on a Linux host that connects to an iSCSI interface must be configured on a different subnet.

    To achieve maximum throughput, connect the iSCSI interface and the hosts for which it provides storage to their own private network; that is, a network dedicated just to them. When choosing the network, consider network performance.

    Path management network requirements

    Note:  Path management software is not supported for a Windows 7 or Mac OS host connected to a Unity system.

    When implementing a highly-available network between a host and your system, keep in mind that:

    • A LUN is visible to both SPs
    • You can configure up to 8 IPs per physical interface. If more than one interface is configured on a physical interface, each interface must be configured on a separate VLAN.
    • Network switches may be on separate subnets.
    Note:  Directly attaching a host to a Unity system is supported if the host connects to both SPs and has the required multipath software.

    The following figure shows a highly-available iSCSI network configuration for hosts accessing a storage resource (iSCSI LUNs). Switch A and Switch B are on separate subnets. Host A and Host B can each access the storage resource through separate NICs. If the storage resource is owned by SP A, the hosts can access the storage resource through the paths to the interfaces on SP A. Should SP A fail, the system transfers ownership of the resource to SP B and the hosts can access the storage resource through the paths to the interfaces on SP B.

    Figure 1. Highly-available iSCSI network sample

    Storage system requirements

    • Install and configure the system using the Initial Configuration wizard.
    • Use Unisphere or the CLI to configure iSCSI interfaces and iSCSI LUNs on the storage system.
    Note:  On an HP-UX host, the iSCSI initiator will not discover the iSCSI storage if it does not detect a LUN from the storage system assigned to host LUN ID 0. We recommend that you create a unique target, create a LUN on this interface, and give it access to the HP-UX host. The first LUN that you assign to a host is automatically assigned host LUN ID 0.

    Using multi-path management software on the host

    Multi-path management software manages the connections (paths) between the host and the storage system, providing continued access to the storage should one of the paths fail. The following types of multi-path management software are available for a host connected to a storage system:

    • EMC PowerPath software on an HP-UX, Linux, or Solaris host
    • Native multipath software on a Citrix XenServer, HP-UX 11i, Linux, or Solaris host

    For compatibility and interoperability information, refer to the Unity Support Matrix on the support website.

    Setting up your system for multi-path management software

    For your system to operate with hosts running multi-path management software, two iSCSI IPs are required. These IPs should be on separate physical interfaces on separate SPs.

    Verify the configuration in Unisphere. For details on how to configure iSCSI interfaces, refer to topics about iSCSI interfaces in the Unisphere online help.

    Note:  For highest availability, use two network interfaces on the iSCSI interface. The network interfaces should be on separate subnets. You can view the network interfaces for an iSCSI interface within Unisphere.

    Installing PowerPath

    Procedure
    1. On the host or virtual machine, download the latest PowerPath version from the PowerPath software downloads section on the Online Support website.
    2. Install PowerPath as described in the appropriate PowerPath installation and administration guide for the host’s or virtual machine’s operating system.
      This guide is available on Online Support. If the host or virtual machine is running the most recent version and a patch exists for this version, install it, as described in the readme file that accompanies the patch.
    3. When the installation is complete, reboot the host or virtual machine.
    4. When the host or virtual machine is back up, verify that the PowerPath service has started.

    Installing native multipath software

    Whether you need to install multipath software depends on the host’s operating system.

    Citrix XenServer

    By default, XenServer uses Linux native multipathing (DM-MP) as its multipath handler. This handler is packaged with the Citrix XenServer operating system software.

    Linux

    To use Linux native multipath software, you must install the Linux multipath tools package as described in Installing or updating the Linux multipath tools package.

    HP-UX 11i

    Native multipath failover is packaged with the HP-UX operating system software.

    Solaris

    Sun’s native path management software is Sun StorEdge™ Traffic Manager (STMS).

    For Solaris 10 — STMS is integrated into the Solaris operating system patches you install. For information on installing patches, refer to the Sun website.

    Installing or updating the Linux multipath tools package

    To use Linux native multipath failover software, the Linux multipath tools package must be installed on the host. This package is installed by default on SuSE SLES 10 or higher, but is not installed by default on Red Hat.

    If you need to install the multipath tools package, install the package from the appropriate website below.

    For SuSE:

    http://www.novell.com/linux/

    The multipath tools package is included with SuSE SLES 9 SP3 and you can install it with YaST or RPM.

    For Red Hat:

    http://www.redhat.com

    The multipath tools package is included with Red Hat RHEL4 U3 or RHEL5, and you can install it with the Package Manager. If an update is available, follow the instructions for installing it on the http://www.redhat.com website.
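
    Whether installed by default or added manually, DM-MP reads its settings from /etc/multipath.conf. The fragment below is a minimal sketch; the two options shown are common DM-MP defaults rather than Unity-specific recommendations, so verify any tuning against the Unity Support Matrix guidance for your distribution:

```
# /etc/multipath.conf -- minimal sketch, not a Unity-tuned configuration
defaults {
    user_friendly_names yes    # map WWIDs to mpathN aliases
    find_multipaths     yes    # only claim devices that have multiple paths
}
```

    After editing the file, restarting the multipathd service makes the settings take effect.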

    What's next?

    Do one of the following:

    AIX host — Setting up for iSCSI storage

    Install AIX software

    Procedure
    1. Log in to the AIX host using an account with administrator privileges.
    2. Download the AIX ODM Definitions software package to the /tmp directory on the AIX host as follows:
      1. Navigate to AIX ODM Definitions on the software downloads section on the Support tab of the Online Support website.
      2. Choose the version of the EMC ODM Definitions for the version of AIX software running on the host, and save the software to the /tmp directory on the host.
    3. Start the System Management Interface Tool to install the software:
                                    smit installp
                                  
    4. In the /tmp directory, uncompress and untar the EMC AIX fileset for the AIX version running on the host:
                                    uncompress EMC.AIX.x.x.x.x.tar.Z
                                    tar -xvf EMC.AIX.x.x.x.x.tar
                                  
    5. In the Install and Update Software menu, select Install and Update from ALL Available Software and enter /tmp as the path to the software.
    6. Select SOFTWARE to install.
    7. After making any changes to the displayed values, press Enter.
    8. Scroll to the bottom of the window to see the Installation Summary, and verify that the message “SUCCESS” appears.
    9. Reboot the AIX host to have the changes take effect.

    Configure the AIX iSCSI initiator

    Enable the AIX host to discover iSCSI targets on the storage system:
    Procedure
    1. On the storage system, from the iSCSI Interfaces page in Unisphere (Storage > Block > iSCSI Interfaces), determine the IQN and the IP address of the storage system iSCSI interface (target) to which you want the host initiator to connect.
    2. On the AIX host, start the System Management Interface Tool:
                                    smit
                                  
    3. Using a text editor, open the file /etc/iscsi/targets.
    4. For each iSCSI interface to be accessed by this initiator, add a line in the format:
                                    {portal} {port} {target_iqn}
                                  

      where:

      • {portal} = IP address of the network portal
      • {port} = number of the TCP listening port (default is 3260)
      • {target_iqn} = formal iSCSI name (IQN) of the target
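
    As an example, an entry in /etc/iscsi/targets following the {portal} {port} {target_iqn} format might look like the following. The address and IQN are hypothetical placeholders; use the values you read from the iSCSI Interfaces page in Unisphere:

```
# /etc/iscsi/targets -- one line per discovery target (values are hypothetical)
10.14.111.222 3260 iqn.1992-04.com.emc:apm00123456789-a0
```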

    Configure LUNs as AIX disk drives

    Install the ODM (Object Data Manager) kit on the AIX host:
    Procedure
    1. Remove any drives that are identified as "Other FC SCSI Disk Drive" by the system by running the following command.
                                    lsdev -Cc disk | grep "Other FC SCSI Disk Drive" | awk '{print $1}' | xargs -n1 rmdev -dl
                                  
    2. When applicable, uninstall any existing CLARiiON ODM file sets.
                                    installp -u EMC.CLARiiON.*
                                  
    3. Use the following commands to download the AIX ODM package version 5.3.x or 6.0.x from the FTP server at ftp.emc.com.
      Note:  IBM AIX Native MPIO for Unity requires a different ODM package. Contact your service provider for more information.
      1. Access the FTP server by issuing the following command:
                                          ftp ftp.emc.com
                                        
      2. Log in with a user name of anonymous and use your email address as a password.
      3. Access the directory that contains the ODM files:
                                          cd /pub/elab/aix/ODM_DEFINITIONS
                                        
      4. Download the ODM package
                                          get EMC.AIX.5.3.x.x.tar.Z
                                        

        or

                                          get EMC.AIX.6.0.x.x.tar.Z
                                        
    4. Prepare the files for installation.
      1. Move the ODM package into the user install directory.
                                          cd /usr/sys/inst.images
                                        
      2. Uncompress the files.
                                          uncompress EMC.AIX.5.3.x.x.tar.Z
                                        
        or
                                          uncompress EMC.AIX.6.0.x.x.tar.Z
                                        
      3. Open, or untar, the files.
                                          tar -xvf EMC.AIX.5.3.x.x.tar
                                        
        or
                                          tar -xvf EMC.AIX.6.0.x.x.tar
                                        
      4. Create or update the TOC file.
                                          inutoc
                                        
    5. Install the files.
      • PowerPath:
                                        installp -ac -gX -d . EMC.CLARiiON.aix.rte
                                        installp -ac -gX -d . EMC.CLARiiON.fcp.rte
                                      
      • MPIO:
                                        installp -ac -gX -d . EMC.CLARiiON.aix.rte
                                        installp -ac -gX -d . EMC.CLARiiON.fcp.MPIO.rte
                                      
      Note:  You can also install the files using the AIX smitty command.

    Prepare the LUNs to receive data

    If you do not want to use a LUN as a raw disk or raw volume, then before AIX can send data to the LUN, you must either partition the LUN or create a file system on it. For information on how to perform these tasks, refer to the AIX operating system documentation.

    Citrix XenServer host — Setting up for iSCSI storage

    To set up a Citrix XenServer host to use iSCSI storage, perform these tasks:

    1. Configure the iSCSI software initiator
    2. Configure the iSCSI software initiator for multipathing

    Configure the iSCSI software initiator

    The XenServer operating system includes iSCSI software that you must configure for each initiator that will connect to the iSCSI storage system.
    Procedure
    1. On the storage system, from the iSCSI Interfaces page in Unisphere (Storage > Block > iSCSI Interfaces), determine the IP address of the system interface (target) to which you want the host initiator to connect.
    2. Open the XenCenter console.
    3. Click New Storage at the top of the console.
    4. In the New Storage dialog box, under Virtual disk storage, select iSCSI.
    5. Under Name, enter a descriptive name for the virtual disk (Storage Repository).
    6. To use optional CHAP:
      1. Check Use CHAP.
      2. Enter the CHAP username and password.
    7. Click Discover IQNs.
    8. Click Discover LUNs.
    9. Once the IQN and LUN fields are populated, click Finish.
      The host scans the target to see if it has any XenServer Storage Repositories (SRs) on it already, and if any exist you are asked if you want to attach to an existing SR or create a new SR.

    Configure the iSCSI software initiator for multipathing

    Citrix recommends enabling multipathing in XenCenter before you connect the pool to the storage device or, if you have already created the storage repository, putting the host into Maintenance Mode before you enable multipathing.

    If you enable multipathing while connected to a storage repository, XenServer may not configure multipathing successfully. If you already created the storage repository and want to configure multipathing, put all hosts in the pool into Maintenance Mode before configuring multipathing and then configure multipathing on all hosts in the pool. This ensures that any running virtual machines that have LUNs in the affected storage repository are migrated before the changes are made.

    Procedure
    1. In XenCenter enable the multipath handler:
      1. On the host’s Properties dialog box, select the Multipathing tab.
      2. On the Multipathing tab, select Enable multipathing on this server.
    2. Verify that multipathing is enabled by clicking the storage resource’s Storage general properties.

    HP-UX host — Setting up for iSCSI storage

    Download and install the HP-UX iSCSI initiator software

    Procedure
    1. On the HP-UX host, open a web browser and download the iSCSI initiator software from the HP-UX website.
    2. Install the initiator software using the information on the site or that you downloaded from the site.

    Configure HP-UX access to an iSCSI interface (target)

    Before an HP-UX iSCSI initiator can send data to or receive data from iSCSI LUNs, you must configure the network parameters for the NIC initiators so that they can connect to the iSCSI interface (target) with the iSCSI LUNs.

    To configure access to an iSCSI interface:

    Procedure
    1. Log into the HP-UX host as superuser (root).
    2. Add the path for iscsiutil and the other iSCSI executables to the root path:
                                    PATH=$PATH:/opt/iscsi/bin
                                  
    3. Verify the iSCSI initiator name:
                                    iscsiutil -l
                                  

      The iSCSI software initiator configures a default initiator name in an iSCSI Qualified Name (IQN) format.

      For example:

                                    iqn.1986-03.com.hp:hpfcs214.2000853943
                                  

      To change the default iSCSI initiator name or reconfigure the name to an IEEE EUI-64 (EUI) format, continue to the next step; otherwise skip to step 5.

    4. Configure the default iSCSI initiator name:
                                    iscsiutil [iscsi-device-file] -i -N iscsi-initiator-name
                                  
      Note:  For more information on IQN and EUI formats, refer to the HP-UX iSCSI software initiator guide.

      where:

      • iscsi-device-file is the iSCSI device path, /dev/iscsi, and is optional if you include the -i or -N switches in the command.
      • -i configures the iSCSI initiator information.
      • -N is the initiator name. When preceded by the -i switch, it requires the iSCSI initiator name. The first 256 characters of the name string are stored in the iSCSI persistent information.
      • iscsi-initiator-name is the initiator name you have chosen, in IQN or EUI format.
    5. Verify the new iSCSI initiator name:
                                    iscsiutil -l
                                  
    6. For each iSCSI target device you will statically identify, store the target device information in the kernel registry, adding one or more discovery targets:
                                    iscsiutil [/dev/iscsi] -a -I ip-address/hostname [-P tcp-port] [-M portal-grp-tag]
                                  

      where

      • -a adds a discovery target address into iSCSI persistent information. You can add discovery target addresses only with this option.
      • -I requires the IP address or hostname of the discovery target address.
      • ip-address/hostname is the IP address or host name component of the target network portal.
      • -P tcp-port is the listening TCP port component of the discovery target network portal (optional). The default iSCSI TCP port number is 3260.
      • -M portal-grp-tag is the target portal group tag (optional). The default target portal group tag for discovery targets is 1.

      For example:

                                    iscsiutil -a -I 192.1.1.110
                                  

      or, if you specify the hostname,

                                    iscsiutil -a -I target.hp.com
                                  

      If the iSCSI TCP port used by the discovery target is different from the default iSCSI port of 3260, you must specify the TCP port used by the discovery target, for example,

                                    iscsiutil -a -I 192.1.1.110 -P 5001
                                  

      or

                                    iscsiutil -a -I target.hp.com -P 5001
                                  
    7. Verify the discovery targets that you have configured:
                                    iscsiutil -p -D
                                  
    8. To discover the operational target devices:
                                    /usr/sbin/ioscan -H 225
      ioscan -NfC disk (for HP-UX 11i v3 only)
                                  
    9. To create the device files for the targets:
                                    /usr/sbin/insf -H 225
                                  
    10. To display operational targets:
                                    iscsiutil -p -O
                                  

    Make the storage processors available to the host

    Verify that each NIC sees only the storage processors (targets) to which it is connected:

                            ioscan -fnC disk
                            insf -e

    For HP-UX 11i v3 only:

                            ioscan -NfC disk

    Verify that native multipath failover sees all paths to the LUNs

    If you are using multipath failover:
    Procedure
    1. Rescan for the LUNs:
                                    ioscan -NfC disk
                                    insf -e
                                  
    2. View the LUNs available to the host:
                                    ioscan -NfnC disk
                                  
    3. Verify that all paths to the storage system are CLAIMED:
                                    ioscan -NkfnC lunpath
                                  

    Prepare the LUNs to receive data

    If you do not want to use a LUN as a raw disk or raw volume, then before HP-UX can send data to the LUN, perform the following tasks as described in the HP-UX operating system documentation:
    Procedure
    1. Make the LUN visible to HP-UX.
    2. Create a volume group on the LUN.

    Linux host — Setting up for iSCSI storage

    To set up a Linux host to use iSCSI storage, perform these tasks:

    1. Configure Linux iSCSI initiator software
    2. Set up the Linux host to use the LUN

    Configure Linux iSCSI initiator software

    The Linux operating system includes the iSCSI initiator software — the iSCSI driver open-iscsi — that comes with the Linux kernel. You must configure this open-iscsi driver with the network parameters for each initiator that will connect to your iSCSI storage system.
    Note:  The Linux iSCSI driver gives the same name to all network interface cards (NICs) in a host. This name identifies the host, not the individual NICs. This means that if multiple NICs from the same host are connected to an iSCSI interface on the same subnet, then only one NIC is actually used. The other NICs are in standby mode. The host uses one of the other NICs only if the first NIC fails.

    Each host connected to an iSCSI storage system must have a unique iSCSI initiator name for its initiators (NICs). To determine a host’s iSCSI initiator name for its NICs, use cat /etc/iscsi/initiatorname.iscsi for open-iscsi drivers. If multiple hosts connected to the iSCSI interface have the same iSCSI initiator name, contact your Linux provider for help with making the names unique.
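
    As a sketch, the initiator name can be pulled out of that file with sed. A sample file is created here so the commands run anywhere; on a real host, read /etc/iscsi/initiatorname.iscsi itself (the IQN shown is a placeholder):

```shell
# Sketch: extract the initiator IQN from an initiatorname.iscsi-style file.
# A temporary sample file stands in for /etc/iscsi/initiatorname.iscsi.
sample=$(mktemp)
echo 'InitiatorName=iqn.1994-05.com.redhat:example-host' > "$sample"

# Strip the "InitiatorName=" prefix to get the bare IQN.
iqn=$(sed -n 's/^InitiatorName=//p' "$sample")
echo "$iqn"   # prints iqn.1994-05.com.redhat:example-host
rm -f "$sample"
```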

    To configure the Linux open-iscsi driver:

    Note:  The EMC Host Connectivity Guide for Linux on the EMC Online Support website provides the latest information about configuring the open-iscsi driver.
    Procedure
    1. On the storage system, from the iSCSI Interfaces page in Unisphere (Storage > Block > iSCSI Interfaces), determine the IP address of the storage system iSCSI interface (target) to which you want the host initiators to connect.
    2. For any Linux initiators connected to the iSCSI interface with CHAP authentication enabled, stop the iSCSI service on the Linux host.
    3. Using a text editor, such as vi, open the /etc/iscsi/iscsi.conf file.
    4. Uncomment (remove the # symbol before) the recommended variable settings in the iSCSI driver configuration file, as listed in the table below:
      Table 1. Open-iscsi driver recommended settings
      • node.startup: default manual; recommended auto
      • node.session.iscsi.InitialR2T: default No; recommended Yes
      • node.session.iscsi.ImmediateData: default Yes; recommended No
      • node.session.timeo.replacement_timeout: default 120; recommended 120
        Note:  In congested networks you may increase this value to 600. However, this time must be greater than the combined node.conn[0].timeo.timeo.noop_out_interval and node.conn[0].timeo.timeo.noop_out_time times.
      • node.conn[0].timeo.timeo.noop_out_interval: default 10; may be increased in congested networks. This value should not exceed node.session.timeo.replacement_timeout.
      • node.conn[0].timeo.timeo.noop_out_timeout: default 15; may be increased in congested networks. This value should not exceed node.session.timeo.replacement_timeout.
    5. To start the iSCSI service automatically on reboot and powerup, set the run level to 345 for the iSCSI service.
    6. Discover and log in to the target to which you want to connect with the iscsiadm command for Red Hat 5 or later or YaST for SuSE 10 or later.
      You need to perform a discovery on only a single IP address because the storage system also returns its other iSCSI target, if it is configured for a second iSCSI interface.
    7. Configure optional CHAP authentication on the open-iscsi driver initiator:
      For Red Hat 5 or later

      Use the iscsiadm command to do the following:

      For optional initiator CHAP:

      1. Enable CHAP as the authentication method.
      2. Set the username for the initiator to the initiator’s IQN, which you can find with the iscsiadm -m node command.
      3. Set the secret (password) for the initiator to the same secret that you entered for the host initiator on the storage system.

      For optional mutual CHAP

      1. Set the username (username_in) to the initiator’s IQN, which you can find with the iscsiadm -m node command.
      2. Set the secret (password_in) for the target to the same secret that you entered for the iSCSI interface.
      For SuSE 10 or later

      Use the YaST to do the following for the open-iscsi driver initiator:

      For optional initiator CHAP:

      1. Enable incoming authentication.
      2. Set the initiator CHAP username to the initiator’s IQN, which you can find with the iscsiadm -m node command.
      3. Set the initiator CHAP password (secret) to the same secret that you entered for the host initiator on the storage system.

      For mutual CHAP:

      1. Enable outgoing authentication (mutual CHAP).
      2. Set the mutual CHAP username to the initiator’s IQN, which you can find with the iscsiadm -m node command.
      3. Set the initiator password (secret) for the target to the same secret that you entered for the iSCSI interface.
    8. Find the driver parameter models you want to use, and configure them as shown in the examples in the configuration file.
    9. Restart the iSCSI service.
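
    The settings from steps 4 and 7 can also be placed directly in /etc/iscsi/iscsid.conf, where they become the defaults for discovered node records. The sketch below assumes the recommended values from Table 1; the CHAP username and secrets are placeholders that must match what you configured on the storage system:

```
# /etc/iscsi/iscsid.conf -- sketch; CHAP values are placeholders
node.startup = auto
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
node.session.timeo.replacement_timeout = 120

# Optional initiator CHAP
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.example:host1
node.session.auth.password = initiator_secret

# Optional mutual CHAP
node.session.auth.username_in = iqn.1994-05.com.example:host1
node.session.auth.password_in = target_secret
```

    Restart the iSCSI service after editing the file so the new defaults are picked up.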

    Set up the Linux host to use the LUN

    Perform the following tasks as described in the Linux operating system documentation:
    Procedure
    1. Find the LUN ID:
      1. In Unisphere, select Storage > Block > LUNs.
      2. On the LUN, select Edit.
      3. On the Properties window, select Access > Access details to determine the LUN ID.
    2. On the host, partition the LUN.
    3. Create a file system on the partition.
    4. Create a mount directory for the file system.
    5. Mount the file system.
    Results
    The Linux host can now write data to and read data from the file system on the LUN.

    Solaris host — Setting up for iSCSI storage

    Configure Sun StorEdge Traffic Manager (STMS)

    If you plan to use STMS on the host to manage the paths to the LUNs, you must first configure it:
    Procedure
    1. Enable STMS by editing the following configuration file:
      Solaris 10 — Do one of the following:
      • Edit the /kernel/drv/fp.conf file by changing the mpxio-disable option from yes to no.

        or

      • Execute the following command:
                                          stmsboot -e
                                        
    2. We recommend that you enable the STMS auto-restore feature to restore LUNs to their default SP after a failure has been repaired. In Solaris 10, auto-restore is enabled by default.
    3. If you want to install STMS offline over NFS, share the root file system of the target host in a way that allows root access over NFS to the installing host. You can use a command such as the following on target_host to share the root file system on target_host so that installer_host has root access:
                                    share -F nfs -d 'root on target_host' -o ro,rw=installer_host,root=installer_host /
                                  
      If the base directory of the package (the default is /opt) is not part of the root file system, it also needs to be shared with root access.
    4. For the best performance and failover protection, we recommend that you set the load balancing policy to round robin:
                                    load-balance="round-robin"
                                  
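
    As a sketch, the file edits in steps 1 and 4 above correspond to lines like the following. The file paths are those used by Solaris 10; verify the exact file names against your Solaris release documentation:

```
# /kernel/drv/fp.conf -- enable STMS (MPxIO)
mpxio-disable="no";

# /kernel/drv/scsi_vhci.conf -- round-robin load balancing
load-balance="round-robin";
```

    A reboot (or stmsboot -e, which edits the configuration and prompts for a reboot) is required for these settings to take effect.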

    Configure Solaris access to an iSCSI interface (target)

    Before a Solaris iSCSI initiator can send data to or receive data from iSCSI LUNs, you must configure the network parameters for the NIC initiators so that they can connect to the iSCSI interface (target) with the iSCSI LUNs.

    To configure access to an iSCSI interface:

    Procedure
    1. Log into the Solaris system as superuser (root).
    2. Configure the target device to be discovered using SendTargets dynamic discovery.
      Example:
                                    iscsiadm modify discovery-address 10.14.111.222:3260
                                  
      Note:  If you do not want the host to see specific targets, use the static discovery method as described in the Solaris server documentation.
    3. Enable the SendTargets discovery method.
      Examples:
                                    iscsiadm modify discovery --sendtargets enable
                                  

      or

                                    iscsiadm modify discovery -t enable
                                  
    4. Create the iSCSI device links for the local system.
      For example:
                                    devfsadm -i iscsi
                                  
    5. If you want Solaris to login to the target more than once (multiple paths), use:
                                    iscsiadm modify target-param -c <logins> <target_iqn>
                                  

      where logins is the number of logins and target_iqn is the IQN of the iSCSI interface (target).

      Note:  You can determine the IQN of the iSCSI interface from Unisphere on the iSCSI Interfaces page (Storage > Block > iSCSI Interfaces).

    Prepare the LUN to receive data

    If you do not want to use the LUN as a raw disk or raw volume, then before Solaris can send data to the LUN, you must perform the following tasks as described in the Solaris operating system documentation:
    Procedure
    1. Partition the LUN.
    2. Create and mount a file system on the partition.

    iSCSI session troubleshooting

    If you receive a connection error when the host is trying to log in to an iSCSI target (iSCSI interface), or you cannot see the LUNs on the target, you may have a problem with the iSCSI session between the initiator and the target.

    If the session cannot be established, or you get unexpected results from the session, follow this procedure:

    Procedure
    1. Use ping with the IP address to verify connectivity from the host to the target’s IP address.
      Using the IP address avoids name resolution issues.
      Note:  You can find the IP address for the target by selecting Storage > Block > iSCSI Interfaces in Unisphere.

      Some switches intentionally drop ping packets or lower their priority during times of high workload. If the ping testing fails when network traffic is heavy, verify the switch settings to ensure the ping testing is valid.

    2. Check the host routing configuration using Unisphere under Settings > Access > Routing.
    3. On the host, verify that the iSCSI initiator service is started.
      Note:  The iSCSI service on the iSCSI interface starts when the system is powered up.
    4. In the host’s iSCSI initiator software, verify the following for the target portal:
      • IP address(es) or DNS name of the storage system iSCSI interface with the host’s LUNs.
        Note:  For a host running PowerPath or native multipath failover, the target portal has two IP addresses.
      • Port is 3260, which is the default communications port for iSCSI traffic.
    5. Verify that the iSCSI qualified names (IQN) for the initiators and the iSCSI interface name for the target are legal, globally unique, iSCSI names.
      Note:  An IQN must be a globally unique identifier of as many as 223 ASCII characters.

      For a Linux host initiator — You can find this IQN with the iscsiadm -m node command, which lists the IP address and associated iqn for each iSCSI initiator.

      For a Solaris host initiator — You can find this IQN with the iscsiadm list initiator-node command.

    6. If you are using optional CHAP authentication, ensure that the following two secrets are identical by resetting them to the same value:
      • The secret for the host initiator in the host’s iSCSI software.
      • The secret for the iSCSI interface on the storage system.
    7. If you are using optional mutual CHAP authentication, ensure that the following two secrets are identical by resetting them to the same value:
      • The secret for the host initiator in the host’s iSCSI software.
      • The secret for the iSCSI interface on the storage system. You can find this secret in the CHAP section of the Access Settings page in Unisphere (Settings > Access > CHAP).
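
    The IQN shape described in step 5 can be sanity-checked with a small shell function. This is a sketch that covers only the common iqn. form; EUI-format names and the full naming rules of the iSCSI specification are not handled:

```shell
# Sketch: check that a name follows the basic IQN shape,
# "iqn." + year-month + "." + reversed domain, optionally ":" + a string,
# at most 223 characters overall.
is_valid_iqn() {
  name="$1"
  [ "${#name}" -le 223 ] || return 1
  printf '%s\n' "$name" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'
}

is_valid_iqn "iqn.1986-03.com.hp:hpfcs214.2000853943" && echo valid   # prints valid
```

    A check like this catches obvious typos before a failed login, but it cannot verify global uniqueness; that still has to be confirmed against the other initiators in your environment.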