    Setting Up a Unix Host to Use Fibre Channel (FC) Storage

    Requirements for setting up a host

    Before you can set up a host to use Unity storage, the following storage system and network requirements must be met.

    SAN requirements

    For a host to connect to FC LUNs or VMware VMFS and Block VVol datastores on the Unity system, the host must be in a SAN environment with the storage system, and zoned so that the host and the storage system are visible to each other over the SAN. For a multi-pathing environment, each Unity FC LUN for the host must have two paths associated with it. These two paths should be on different switches to ensure high availability.
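
    For illustration only, single-initiator zoning on a Brocade switch might look like the following sketch. The zone name, configuration name, and WWPNs are hypothetical, and Cisco MDS and other switches use different commands:

      # Zone one host HBA port with one Unity SP port (hypothetical WWPNs)
      zonecreate "hostA_spa", "10:00:00:90:fa:12:34:56; 50:06:01:60:88:60:12:34"
      # Add the zone to the active configuration and enable it
      cfgadd "prod_cfg", "hostA_spa"
      cfgenable "prod_cfg"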

    Path management SAN requirements

    When implementing a highly available SAN between a host and the Unity system, keep in mind that:

    • A LUN or VMware VMFS datastore is visible to both SPs.
    • You can configure multiple paths for a LUN. These paths should be associated with separate physical ports on the same SP.
    • Each LUN must present the same LUN ID to all hosts.
    Directly attaching a host to a storage system is supported if the host connects to both SPs and has the required multipath software.

    Storage system requirements

    • Install and configure the system using the Initial Configuration wizard.
    • Use Unisphere or the CLI to configure NAS servers or interfaces, or iSCSI or Fibre Channel (FC) LUNs, on the storage system.
    On an HP-UX host, the initiator will not discover the FC storage if it does not detect a LUN from the storage system assigned to host LUN ID 0. We recommend that you create a unique target, create a LUN on this interface, and give the HP-UX host access to it. The first LUN that you assign to a host is automatically assigned host LUN ID 0.

    Using multi-path management software on the host

    Multi-path management software manages the connections (paths) between the host and the storage system so that storage remains accessible if one of the paths fails. The following types of multi-path management software are available for a host connected to a storage system:

    • EMC PowerPath software on an HP-UX, Linux, or Solaris host
    • Native multipath software on a Citrix XenServer, HP-UX 11i, Linux, or Solaris host

    For compatibility and interoperability information, refer to the Unity Support Matrix on the support website.

    Setting up a system for multi-path management software

    For a system to operate with hosts running multi-path management software, each LUN on the system should be associated with two paths.

    Installing PowerPath

    Procedure
    1. On the host or virtual machine, download the latest PowerPath version from the PowerPath software downloads section on the EMC Online Support website.
    2. Install PowerPath using a Custom installation and the Celerra option, as described in the appropriate PowerPath installation and administration guide for the host’s or virtual machine’s operating system.
      This guide is available on EMC Online Support. If the host or virtual machine is running the most recent version and a patch exists for this version, install it, as described in the readme file that accompanies the patch.
    3. When the installation is complete, reboot the host or virtual machine.
    4. When the host or virtual machine is back up, verify that the PowerPath service has started.
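
    For example, on most supported hosts you can confirm that PowerPath is running and managing paths with the powermt utility (a sketch; exact output varies by platform and PowerPath version):

      # List every device PowerPath manages and the state of each path
      powermt display dev=all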

    Installing native multipath software

    Whether you need to install multipath software depends on the host’s operating system.

    Citrix XenServer

    By default, XenServer uses Linux native multipathing (DM-MP) as its multipath handler. This handler is packaged with the Citrix XenServer operating system software.

    Linux

    To use Linux native multipath software, you must install the Linux multipath tools package as described in Installing or updating the Linux multipath tools package.

    HP-UX 11i

    Native multipath failover is packaged with the HP-UX operating system software.

    Solaris

    Sun’s native path management software is Sun StorEdge™ Traffic Manager (STMS).

    For Solaris 10 — STMS is integrated into the Solaris operating system patches you install. For information on installing patches, refer to the Sun website.

    Installing or updating the Linux multipath tools package

    To use Linux native multipath failover software, the Linux multipath tools package must be installed on the host. This package is installed by default on SuSE SLES 10 or higher, but is not installed by default on Red Hat.

    If you need to install the multipath tools package, install the package from the appropriate website below.

    For SuSE:

    http://www.novell.com/linux/

    The multipath tools package is included with SuSE SLES 9 SP3 and you can install it with YaST or RPM.

    For Red Hat:

    http://www.redhat.com

    The multipath tools package is included with Red Hat RHEL4 U3 or RHEL5, and you can install it with the Package Manager. If an update is available, follow the instructions for installing it on the http://www.novell.com/linux/ or http://www.redhat.com website.
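
    As a rough sketch on a RHEL-family host (package, command, and service names are assumptions that vary by distribution and release), installing and enabling the tools might look like this:

      # Install the device-mapper multipath tools
      yum install device-mapper-multipath
      # Write a default /etc/multipath.conf and start multipathd
      mpathconf --enable --with_multipathd y
      # List the multipath devices and their paths
      multipath -ll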

    What's next?

    Do one of the following:

    AIX host — Setting up for FC storage

    To set up an AIX host to use LUNs over Fibre Channel, perform these tasks:

    1. Install Celerra AIX software
    2. Configure LUNs as AIX disk drives
    3. Prepare the LUNs to receive data

    Install Celerra AIX software

    Procedure
    1. Log in to the AIX host using an account with administrator privileges.
    2. Download the AIX ODM Definitions software package to the /tmp directory on the AIX host as follows:
      1. Navigate to AIX ODM Definitions on the software downloads section on the Support tab of the EMC Powerlink website.
      2. Choose the version of the EMC ODM Definitions for the version of AIX software running on the host, and save the software to the /tmp directory on the host.
    3. Start the System Management Interface Tool to install the software:
        smit installp
    4. In the /tmp directory, uncompress and untar the EMC AIX fileset for the AIX version running on the host:
        uncompress EMC.AIX.x.x.x.x.tar.Z
        tar -xvf EMC.AIX.x.x.x.x.tar
    5. In the Install and Update Software menu, select Install and Update from ALL Available Software and enter /tmp as the path to the software.
    6. Select SOFTWARE to install.
    7. After making any changes to the displayed values, press Enter.
    8. Scroll to the bottom of the window to see the Installation Summary, and verify that the message “SUCCESS” appears.
    9. Reboot the AIX host to have the changes take effect.

    Configure LUNs as AIX disk drives

    Install the ODM (Object Data Management) kit on the AIX host:
    Procedure
    1. Remove any drives that are identified as "Other FC SCSI Disk Drive" by the system by running the following command.
        lsdev -Cc disk | grep "Other FC SCSI Disk Drive" | awk '{print $1}' | xargs -n1 rmdev -dl
    2. When applicable, uninstall any existing CLARiiON ODM file sets.
        installp -u EMC.CLARiiON.*
    3. Use the following commands to download the AIX ODM package version 5.3.x or 6.0.x from the FTP server at ftp.emc.com.
      IBM AIX Native MPIO for Unity requires a different ODM package. Contact your service provider for more information.
      1. Access the FTP server by issuing the following command:
          ftp ftp.emc.com
      2. Log in with a user name of anonymous and use your email address as a password.
      3. Access the directory that contains the ODM files:
          cd /pub/elab/aix/ODM_DEFINITIONS
      4. Download the ODM package.
          get EMC.AIX.5.3.x.x.tar.Z

        or

          get EMC.AIX.6.0.x.x.tar.Z
    4. Prepare the files for installation.
      1. Move the ODM package into the user install directory.
          cd /usr/sys/inst.images
      2. Uncompress the files.
          uncompress EMC.AIX.5.3.x.x.tar.Z

        or

          uncompress EMC.AIX.6.0.x.x.tar.Z
      3. Open, or untar, the files.
          tar -xvf EMC.AIX.5.3.x.x.tar

        or

          tar -xvf EMC.AIX.6.0.x.x.tar
      4. Create or update the TOC file.
          inutoc
    5. Install the files.
      • PowerPath:
          installp -ac -gX -d . EMC.CLARiiON.aix.rte
          installp -ac -gX -d . EMC.CLARiiON.fcp.rte
      • MPIO:
          installp -ac -gX -d . EMC.CLARiiON.aix.rte
          installp -ac -gX -d . EMC.CLARiiON.fcp.MPIO.rte
      You can also install the files using the AIX smitty command.
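
      To confirm that the filesets installed successfully, you might list them with lslpp (a sketch):

          # List the installed EMC CLARiiON filesets and their state
          lslpp -l "EMC.CLARiiON.*"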

    Scan and verify LUNs

    This task explains how to scan the system for LUNs using AIX, PowerPath, or MPIO.

    Before you begin
    After installing the AIX ODM package for Unity, scan and verify LUNs on the Unity system.
    Procedure
    1. Use AIX to scan for drives using the following command:
        cfgmgr
    2. Verify that all FC drives have been configured properly, and display any unrecognized drives.
        lsdev -Cc disk

      PowerPath output example:

        hdisk1      Available          EMC CLARiiON FCP VRAID Disk
        hdisk2      Available          EMC CLARiiON FCP VRAID Disk

      MPIO output example:

        hdisk1      Available          EMC CLARiiON FCP MPIO VRAID Disk
        hdisk2      Available          EMC CLARiiON FCP MPIO VRAID Disk

    Prepare the LUNs to receive data

    If you do not want to use a LUN as a raw disk or raw volume, then before AIX can send data to the LUN, you must either partition the LUN or create a file system on it. For information on how to perform these tasks, refer to the AIX operating system documentation.
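
    For example, a minimal sketch using AIX LVM and JFS2, where hdisk2, the volume group name, and the mount point are hypothetical:

      # Create a volume group on the LUN (hdisk2 is a hypothetical device name)
      mkvg -y unityvg hdisk2
      # Create a JFS2 file system in the volume group and mount it
      crfs -v jfs2 -g unityvg -a size=10G -m /unity_data -A yes
      mount /unity_data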

    Citrix XenServer host — Setting up for FC storage

    To set up a Citrix XenServer host to use LUNs over Fibre Channel, perform these tasks:

    1. Configure the FC target
    2. Configure the FC target for multipathing

    Configure the FC target

    The XenServer operating system includes FC software that you must configure for each initiator that will connect to the FC storage.
    Procedure
    1. Open the XenCenter console.
    2. Click New Storage at the top of the console.
    3. In the New Storage dialog box, under Virtual disk storage, select Hardware HBA.
    4. Under Name, enter a descriptive name for the LUN (Storage Repository).
    5. Click Next.
    6. Select a LUN, and click Finish.
      The host scans the target to see if it has any XenServer Storage Repositories (SRs) on it already, and if any exist you are asked if you want to attach to an existing SR or create a new SR.

    Configure the FC target for multipathing

    Citrix recommends either enabling multipathing in XenCenter before you connect the pool to the storage device or, if you already created the storage repository, putting the host into Maintenance Mode before you enable multipathing.

    If you enable multipathing while connected to a storage repository, XenServer may not configure multipathing successfully. If you already created the storage repository and want to configure multipathing, put all hosts in the pool into Maintenance Mode before configuring multipathing and then configure multipathing on all hosts in the pool. This ensures that any running virtual machines that have LUNs in the affected storage repository are migrated before the changes are made.

    Procedure
    1. In XenCenter enable the multipath handler:
      1. On the host’s Properties dialog box, select the Multipathing tab.
      2. On the Multipathing tab, select Enable multipathing on this server.
    2. Verify that multipathing is enabled by clicking the storage resource’s Storage general properties.
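
    If you prefer the command line, multipathing can also be enabled per host with the xe CLI while the host is in Maintenance Mode. A sketch, assuming the other-config:multipathing and other-config:multipathhandle keys used by this XenServer release; the UUID is a placeholder you would look up with xe host-list:

      # Enable multipathing and select the DM-MP handler (UUID is a placeholder)
      xe host-param-set other-config:multipathing=true uuid=<host-uuid>
      xe host-param-set other-config:multipathhandle=dmp uuid=<host-uuid>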

    HP-UX host — Setting up for FC storage

    Download and install the HP-UX FC HBA software

    Procedure
    1. On the HP-UX host, open a web browser and download the initiator software from the HP-UX website.
    2. Install the initiator software using the information on the site or that you downloaded from the site.

    Make the storage processors available to the host

    Verify that each HBA sees only the storage processors (targets) to which it is connected:

      ioscan -fnC disk
      insf -e

    For HP-UX 11i v3 only:

      ioscan -NfC disk

    Verify that native multipath failover sees all paths to the LUNs

    If you are using multipath failover:
    Procedure
    1. Rescan for the LUNs:
        ioscan -NfC disk
        insf -e
    2. View the LUNs available to the host:
        ioscan -NfnC disk
    3. Verify that all paths to the storage system are CLAIMED:
        ioscan -NkfnC lunpath

    Prepare the LUNs to receive data

    If you do not want to use a LUN as a raw disk or raw volume, then before HP-UX can send data to the LUN, perform the following tasks as described in the HP-UX operating system documentation:
    Procedure
    1. Make the LUN visible to HP-UX.
    2. Create a volume group on the LUN.
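
    As an illustration of step 2, a minimal LVM sketch on HP-UX 11i v3, where disk4 and the volume group name are hypothetical:

      # Initialize the LUN as an LVM physical volume (disk4 is hypothetical)
      pvcreate /dev/rdisk/disk4
      # Create the volume group node and the volume group itself
      mkdir /dev/vg01
      mknod /dev/vg01/group c 64 0x010000
      vgcreate /dev/vg01 /dev/disk/disk4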

    Linux host — Setting up for FC storage

    To set up a Linux host to use LUNs over Fibre Channel, perform these tasks:

    Scan the storage system for LUNs

    Execute the Linux scan LUNs command.

    Until you scan the storage system for attached LUNs, they may appear in Linux as LUNZ, even after host access is granted to those LUNs. For example:
      # lsscsi |egrep -i dgc
      [13:0:2:0]   disk    DGC      LUNZ             4200  /dev/sdj
      [13:0:4:0]   disk    DGC      LUNZ             4200  /dev/sdo
      [13:0:5:0]   disk    DGC      LUNZ             4200  /dev/sdv
      [13:0:6:0]   disk    DGC      LUNZ             4200  /dev/sdz
      [14:0:2:0]   disk    DGC      LUNZ             4200  /dev/sdm
      [14:0:4:0]   disk    DGC      LUNZ             4200  /dev/sdu
      [14:0:5:0]   disk    DGC      LUNZ             4200  /dev/sdx
      [14:0:6:0]   disk    DGC      LUNZ             4200  /dev/sdy
      [15:0:2:0]   disk    DGC      LUNZ             4200  /dev/sdac
      [15:0:4:0]   disk    DGC      LUNZ             4200  /dev/sdag
      ...
    The first column of the output shows [Host:Bus:Target:LUN] for each SCSI device, with the last value representing the LUN number.
    Procedure
    1. In Unisphere, grant LUN access to the Linux host.
      Ensure that a LUN with LUN ID 0 is present on the Unity system. See Modify Host LUN IDs for information on manually changing LUN IDs.
    2. On the Linux server, run the SCSI bus scan command with the -r option:
        rescan-scsi-bus.sh -a -r
    3. On the Linux server, rerun the lsscsi |egrep -i dgc command to verify the LUN IDs show up appropriately on the Linux host.
        # lsscsi |egrep -i dgc
        [13:0:2:0]   disk    DGC      VRAID            4200  /dev/sdbl
        [13:0:2:1]   disk    DGC      VRAID            4200  /dev/sdcf
        [13:0:2:2]   disk    DGC      VRAID            4200  /dev/sdcg
        [13:0:4:0]   disk    DGC      VRAID            4200  /dev/sdad
        [13:0:4:1]   disk    DGC      VRAID            4200  /dev/sdch
        [13:0:4:2]   disk    DGC      VRAID            4200  /dev/sdci
        [13:0:5:0]   disk    DGC      VRAID            4200  /dev/sdbj
        [13:0:5:1]   disk    DGC      VRAID            4200  /dev/sdcj
        [13:0:5:2]   disk    DGC      VRAID            4200  /dev/sdck
        ...
    4. If LUNZ continues to display, rerun the rescan command using the --forcerescan option.
        rescan-scsi-bus.sh --forcerescan
      If the issue persists and LUNZ still displays, a Linux reboot may be required in order for Linux to recognize the LUNs. Refer to the following Linux knowledgebase article for more information: https://www.suse.com/support/kb/doc/?id=7009660

    Set up the Linux host to use the LUN

    Perform the following tasks as described in the Linux operating system documentation:
    Procedure
    1. Find the LUN ID:
      1. In Unisphere, select Storage > Block > LUNs.
      2. On the LUN, select Edit.
      3. On the Properties window, select Access > Access details to determine the LUN ID.
    2. On the host, partition the LUN.
    3. Create a file system on the partition.
    4. Create a mount directory for the file system.
    5. Mount the file system.
    Results
    The Linux host can now write data to and read data from the file system on the LUN.
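
    A minimal sketch of steps 2 through 5, assuming the LUN surfaced as /dev/sdb (a hypothetical device name; on a multipathed host use the corresponding /dev/mapper device instead):

      # Partition the LUN, create a file system, and mount it
      parted -s /dev/sdb mklabel gpt
      parted -s /dev/sdb mkpart primary 0% 100%
      mkfs.ext4 /dev/sdb1
      mkdir -p /mnt/unity_lun
      mount /dev/sdb1 /mnt/unity_lun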

    Solaris host — Setting up for FC storage

    To set up a Solaris host to use LUNs over Fibre Channel, perform these tasks:

    1. Configure Sun StorEdge Traffic Manager (STMS)
    2. Prepare the LUN to receive data

    Configure Sun StorEdge Traffic Manager (STMS)

    If you plan to use STMS on the host to manage the paths to the LUNs, you must first configure it:
    Procedure
    1. Enable STMS by editing the following configuration file:
      Solaris 10 — Do one of the following:
      • Edit the /kernel/drv/fp.conf file by changing the mpxio-disable option from yes to no.

        or

      • Execute the following command:
          stmsboot -e
    2. We recommend that you enable the STMS auto-restore feature to restore LUNs to their default SP after a failure has been repaired. In Solaris 10, auto-restore is enabled by default.
    3. If you want to install STMS offline over NFS, share the root file system of the target host in a way that allows root access over NFS to the installing host. You can use a command such as the following on target_host to share the root file system on target_host so that installer_host has root access:
        share -F nfs -d 'root on target_host' -o ro,rw=installer_host,root=installer_host /
      If the base directory of the package (the default is /opt) is not part of the root file system, it also needs to be shared with root access.
    4. For the best performance and failover protection, we recommend that you set the load balancing policy to round-robin. In Solaris 10, this is controlled by the load-balance setting in the /kernel/drv/scsi_vhci.conf file:
        load-balance="round-robin";
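
    After enabling STMS and rebooting, you can confirm that multipathed devices are visible. A sketch using utilities that ship with Solaris 10:

      # List multipathed logical units and their operational path counts
      mpathadm list lu
      # Show the stmsboot device-name mappings
      stmsboot -L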

    Prepare the LUN to receive data

    If you do not want to use the LUN as a raw disk or raw volume, then before Solaris can send data to the LUN, you must perform the following tasks as described in the Solaris operating system documentation:
    Procedure
    1. Partition the LUN.
    2. Create and mount a file system on the partition.
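
    For example, a minimal sketch with UFS, where cXtYdZ stands in for the hypothetical device name the LUN received:

      # Label and partition the LUN interactively, then create and mount a UFS file system
      format
      newfs /dev/rdsk/cXtYdZs6
      mkdir /mnt/unity_lun
      mount /dev/dsk/cXtYdZs6 /mnt/unity_lun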