    Import workflow

    Configure import

    You can manually import a VDM (including all its associated file-based storage, network and configuration information, and file systems) from a VNX storage system to a Unity storage system.

    VDM import operations support only:

    • Import of a VDM with only the NFSv3 protocol enabled (VDMs with the NFSv4 protocol enabled are not supported)
    • Import of a VDM with only the CIFS protocol enabled
    Import of a VDM with multiprotocol file systems, or with both NFS and CIFS file systems exported/shared, is not supported.
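    Before planning an import, you can confirm on the VNX Control Station which protocols a source VDM actually serves. The following is a quick, hedged check using standard VNX CLI commands; the VDM name vdm_finance is a placeholder, and the output details depend on your configuration:
        # Display the CIFS configuration (if any) of the source VDM
        server_cifs vdm_finance
        # List the NFS exports (if any) of the source VDM
        server_export vdm_finance -Protocol nfs -list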

    You can also run multiple concurrent block import sessions for LUNs, or Consistency Groups (CGs) of LUNs, from the VNX system to the Unity system.

    NFS-only VDM import

    The configuration and identity information that can be imported along with the data as part of an NFS-only VDM import includes:

    • Networking:
      • IP address configuration
      • Routing configuration
      • VLAN configuration
    • Name Services:
      • DNS
      • LDAP
      • Local files
      • NIS
    • NFS server identity:
      • NFS exports
    • Data:
      • File systems (including quota configuration)
      • Security or permissions for NFSv3
    When the domain configuration is disabled for the source VDM, only the first DNS domain configured on the physical Data Mover that hosts the VDM is imported. If the intended DNS domain for the source VDM is not the first one on the physical Data Mover, the wrong DNS configuration is imported. To avoid importing the wrong DNS domain, enable the Name Service configuration on the source VDM by using the server_nsdomains CLI command on the source VNX. Use this command to enable and set the DNS, LDAP, or NIS configuration on the source VDM to ensure a proper import to the Unity system.

    Source VNX VDM servers, along with their respective UFS32-based file systems, are migrated to the new UFS64-based file system format on the target Unity system. In addition, all file systems are migrated as Thin. File systems cannot be migrated individually, only as part of the VDM server migration.

    For a file system that is used by VMware as an NFS data store, you must specify that file system to be imported as a VMware NFS data store. Only one NFS export, at the file system root directory, should exist on the VNX for each such source file system. Otherwise, both the create session and resume session operations fail, because a VMware NFS data store in Unity supports only this export configuration.

    The target file system is a normal file system by default. If you specify a file system to be imported as a VMware NFS data store, the target file system is a VMware NFS data store, which enables virtualization-specific optimizations such as asyncmtime. A source file system with file level retention (FLR) enabled cannot be imported as a VMware data store file system.

    All VNX systems are configured by default with the 8859-1 code page for NFSv3 clients. This code page is used to translate file names from 8859-1 (network format) to UTF-8 (disk format). The code page can be changed to UTF-8 or 8859-15. Code page 8859-15, which includes the most commonly used western European characters, is an extension of the 8859-1 code page.

    When a VDM from a VNX system is imported to a Unity system through NFS, the Unity system browses the VNX files using a UTF-8 NFS client. This process can cause problems with file names that include non-ASCII characters: to preserve the extended characters in the file names of the source VNX, the NFS import has to browse the VNX files through an 8859-1 NFS client. Unity systems running OE 4.2 and earlier are configured to support only NFSv3 clients configured for UTF-8, and the code page on these systems cannot be changed. With Unity OE 4.3 and later, the code page of the Unity system can be changed through the svc_nas {<NAS_server_name> | all} -param -facility vdm -modify codepage -value <value> service command to match the code page used on the VNX system. Matching the code pages allows code page translation for the file names seen from NFSv3 clients and reproduces the behavior of the VNX system on the Unity system.
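    For example, the following is a hedged sketch assuming a Unity system running OE 4.3 or later, a NAS server named nas_fin created for the import, and a source VNX that still uses the default 8859-1 code page; the value string shown is illustrative, so check the svc_nas help output for the exact values accepted:
        # Align the Unity NAS server code page with the source VNX code page
        svc_nas nas_fin -param -facility vdm -modify codepage -value 8859-1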

    The default code page in Unity systems does not need to be changed for either NFSv4 or SMB clients. NFSv4 clients support only UTF-8, and SMB clients support only Unicode.

    Prerequisites for NFS-only VDM import

    Import of a VDM and its related file systems from a VNX storage system to a Unity storage system requires the following prerequisites:

    • Time is synchronized between the VNX and Unity storage systems. The time difference must be within five seconds. Use the same NTP server on the VNX Data Mover that hosts the source VDM, as well as the target Unity SPs. Refer to Configuring Time Services on VNX for information about NTP.
    • One or more pools must be created and available on the target Unity system to perform a VDM import operation. The pools that are selected should be large enough to hold the source VDM and all its file systems that are migrated.
      Compressed data is uncompressed and deduped data is undeduped for migration. Therefore, ensure that the target pool has enough capacity to handle these changes in data size. Check on the source VNX for the amount of space savings compression and deduplication provide for the data, then determine how much space would be required for the uncompressed and undeduped data.
    • Before creating an import connection, you must configure a mobility interface IP address for each SP (A and B) of the target system. (When creating an import session, you select the mobility interface IP address of either SPA or SPB to use as the target import interface. This interface is used to migrate the VNX VDM and its file systems.)
    • Before creating the VDM import session, you must do the following:
      • Create a migration interface on the source Data Mover (for IPv4, use server_ifconfig <server_name> -create -Device <device> -name <nas_migration_interface_name> -protocol IP <ipv4> <ipnetmask> <ipgateway>; for IPv6, use server_ifconfig <server_name> -create -Device <device> -name <nas_migration_interface_name> -protocol IP6 <ipv6/PrefixLength>) and attach the interface to the source VDM to be migrated (using nas_server -vdm <vdm_name> -attach <nas_migration_interface_name>). The interface added on the source VDM to perform migration must be named with the prefix "nas_migration_" so that the interface can be clearly identified by the migration process. This interface is used only for the VDM import operation and must not be used as a production interface. After each VDM import session is committed, this interface can be reused by attaching it to the next VDM, and so on. A worked example with sample values follows this list.
      • Verify that the physical Data Mover on which the source VDM is located has at least one IP interface configured that is not attached to the VDM being migrated. This IP interface ensures that the source Data Mover can provide uninterrupted Name Services for the remaining file servers. If this additional interface is not present, the VDM import session fails.
      • Ensure the code page used on the target Unity system matches the code page used on the source VNX system.
    • For a source VNX system with two Control Stations, the home directory of a user with the Administrator role, which is used in configuring the import connection, must exist on the primary Control Station of the VNX. For more information, see VNX system with two Control Stations.
    • Before conducting a VDM import, an import connection must be created between the source VNX system and the target Unity system.
    If the naming services server (DNS, NIS, or LDAP) configured on the Data Mover of the source VNX cannot be connected using the network interface that is attached to the VDM to be migrated, attach another interface to the VDM. This additional interface ensures that the VDM can connect to the naming services server. Otherwise, the target NAS server cannot connect to the naming services server. If the naming services server configured on the Data Mover of the source VNX can only be connected using the network interface that is attached to the VDM to be migrated, create another network interface on the Data Mover. This additional network interface ensures that the Data Mover can connect to the naming services server. Otherwise, other clients of the Data Mover cannot connect to the naming services server.
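    The following is a worked example of the migration interface prerequisite above. It is an illustrative sketch only: the Data Mover name (server_2), device (fsn0), interface name (nas_migration_fin), addressing values, and VDM name (vdm_finance) are placeholders, and the address arguments follow the order quoted in the prerequisite. Substitute values from your own environment:
        # Create the migration interface on the physical Data Mover that hosts the VDM
        server_ifconfig server_2 -create -Device fsn0 -name nas_migration_fin -protocol IP 192.168.10.50 255.255.255.0 192.168.10.1
        # Attach the migration interface to the source VDM to be migrated
        nas_server -vdm vdm_finance -attach nas_migration_fin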

    CIFS-only VDM import

    The configuration and identity information that can be imported along with the data as part of a CIFS-only VDM import includes:

    • Networking:
      • IP address configuration
      • Routing configuration
      • VLAN configuration
    • Name Services:
      • DNS
    • CIFS server identity:
      • Name
      • Active Directory (AD) account
      • CIFS shares
      • Local group
      • Local users
    • VDM settings:
      • Parameters
      • Quota
    • Data:
      • File systems
      • File security (Access Control List (ACL) preservation)
      • Timestamps (create date and last modification date are not modified during import)
    When the domain configuration is disabled for the source VDM, the domain name of the SMB server is used to search the corresponding DNS configuration. If the configuration cannot be found, the migration cannot start. Enable the domain configuration for the VDM by using the server_nsdomains CLI command. Use this command to enable and set the DNS, LDAP, or NIS configuration on the source VDM to ensure proper import to the Unity system.

    Source VNX VDM servers, along with their respective UFS32-based file systems, are migrated to the new UFS64-based file system format on the target Unity system. In addition, all file systems are migrated as Thin and maintained as Thin on the target Unity system. File systems cannot be migrated individually, only as part of the VDM server migration.

    Prerequisites for CIFS-only VDM import

    Import of a CIFS-only VDM and its related file systems from a VNX storage system to a Unity storage system requires the following prerequisites:

    • Time is synchronized between the VNX and Unity storage systems. The time difference must be within five seconds. Use the same NTP server on the VNX Data Mover that hosts the source VDM, as well as the target Unity SPs. Refer to Configuring Time Services on VNX for information about NTP.
    • One or more pools must be created and available on the target Unity system to perform a VDM import operation. The pools that are selected should be large enough to hold the source VDM and all its file systems that are migrated.
      Compressed data is uncompressed and deduped data is undeduped for migration. Therefore, ensure that the target pool has enough capacity to handle these changes in data size. Check on the source VNX for the amount of space savings compression and deduplication provide for the data, then determine how much space would be required for the uncompressed and undeduped data.
    • Before creating the VDM import session, you must do the following:
      • Create a migration interface on the source Data Mover (for IPv4, use server_ifconfig <server_name> -create -Device <device> -name <nas_migration_interface_name> -protocol IP <ipv4> <ipnetmask> <ipgateway>; for IPv6, use server_ifconfig <server_name> -create -Device <device> -name <nas_migration_interface_name> -protocol IP6 <ipv6/PrefixLength>) and attach the interface to the source VDM to be migrated (using nas_server -vdm <vdm_name> -attach <nas_migration_interface_name>). The interface added on the source VDM to perform migration must be named with the prefix "nas_migration_" so that the interface can be clearly identified by the migration process. This interface is used only for the VDM import operation and must not be used as a production interface. After each VDM import session is committed, this interface can be reused by attaching it to the next VDM, and so on.
      • Verify the following:
        • The physical Data Mover on which the source VDM is located has at least one IP interface configured that is not attached to the VDM being migrated. This IP interface ensures that the source Data Mover can provide uninterrupted Name Services for the remaining file servers. If this additional interface is not present, the VDM import session fails.
        • (For a source VDM with an AD-joined CIFS server only) The migration interface has been added to the source CIFS server and has a prefix for a DNS domain that is different from the production interface. To add this interface in its own DNS zone, use the following command format: server_cifs <vdm_name> -add compname=<compname>,domain=<domainname>,interface=<nas_migration_interface>,dns=<specific_prefix.domainname>. This command creates an additional zone in the DNS server for hosting the migration IP address, which ensures that the migration interface is not used for production. A worked example with sample values follows this list. For more information about CLI commands related to the source system, refer to the VNX Command Line Interface Reference for File.
        • A single CIFS server is configured on the source VDM.
        • C$ share is available on the source Data Mover that hosts the VDM and is not disabled or set to Read-only. The C$ share must be available, otherwise the import cannot start. If it was disabled or Read-only on the source, change the corresponding parameters to enable it:
            server_param <source_server> -facility cifs -modify admin.shareC_NotCreated -value 0
            server_param <source_server> -facility cifs -modify admin.shareC_RO -value 0
          You must stop and start the service associated with the CIFS facility for changes to admin.shareC_NotCreated to take effect.
        • No NFS exports are configured on the source VDM file systems.
        • Local CIFS users are enabled on the source CIFS server. (For AD-joined CIFS servers only) If local users are not enabled, enable them on the source CIFS server using server_cifs <vdmname> -add compname=<computername>,domain=<domainname>,local_users. Refer to Configuring and Managing CIFS on VNX for more information about enabling local user support on VNX.
        • A local user that is a member of the local administrators group of the source CIFS server must exist on the source CIFS server. This user must have backup and restore privileges (by default, membership in the local administrators group is sufficient). Refer to Configuring and Managing CIFS on VNX for information about local user and group accounts.
        • The credentials (username and password) of the remote local CIFS user to be used for the import are available.
        • The extended acl feature is enabled on the source Data Mover that hosts the VDM (parameter cifs.acl.extacl should have bits 2, 3, and 4 set; decimal value 28). Use the following command to view the settings:
            server_param <source_datamover> -facility cifs -info acl.extacl
          If necessary, use the following command to change the setting:
            server_param <source_datamover> -facility cifs -modify acl.extacl -value 28
        • The Unknown SID feature is enabled on the source Data Mover that hosts the VDM (parameter cifs.acl.mappingErrorAction must be set to 0x0b, decimal value 11). Use the following command to view the settings:
            server_param <source_datamover> -facility cifs -info acl.mappingErrorAction
          If necessary, use the following command to change the setting:
            server_param <source_datamover> -facility cifs -modify acl.mappingErrorAction -value 11
        • NT security is enabled on the source; the Share and Unix security levels are not supported. The security level is set in the mount options of the file systems. If necessary, change the mount options of the file systems.
        • The source VDM is not utf8-based.
        • The source CIFS server is not a Windows NT 4.0-like CIFS server.
        • DNS is configured for the Windows domain in the case of a domain-joined CIFS server.
        • Other VDMs from the source can reach DNS and domain controller (DC) after cutover.
        • DNS and DCs are reachable on the destination after cutover.
    • For a source VNX system with two Control Stations, the home directory of a user with the Administrator role, which is used in configuring the import connection, must exist on the primary Control Station of the VNX. For more information, see VNX system with two Control Stations.
    • Before creating an import connection, you must configure a mobility interface IP address for each SP (A and B) of the target system. (When creating an import session, you select the mobility interface IP address of either SPA or SPB to use as the target import interface. This interface is used to migrate the VNX CIFS server and file systems.)
    • Before conducting a VDM import, an import connection must be created between the source VNX system and the target Unity system.
      • If the naming services server (DNS) configured on the Data Mover of the source VNX cannot be connected using the network interface that is attached to the source VDM, attach another interface to the VDM. This additional interface ensures that the source VDM can connect to the naming services server.
      • If the naming services server can only be connected using the network interface that is attached to the source VDM, create another network interface on the Data Mover. This additional network interface ensures that the Data Mover can connect to the naming services server as well as other clients of the Data Mover.
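    The following is a worked illustration of the CIFS-specific preparation steps above, following the command syntax quoted in the prerequisites. All values (vdm_finance, WINSRV01, corp.example.com, nas_migration_fin, server_2) are placeholders, and the server_setup restart shown for the C$ share parameter change is the standard way to stop and start the CIFS service on a Data Mover; verify the exact syntax against the VNX Command Line Interface Reference for File:
        # Add the migration interface to the AD-joined CIFS server in its own DNS zone
        server_cifs vdm_finance -add compname=WINSRV01,domain=corp.example.com,interface=nas_migration_fin,dns=migration.corp.example.com
        # Enable local users on the source CIFS server (AD-joined CIFS servers only)
        server_cifs vdm_finance -add compname=WINSRV01,domain=corp.example.com,local_users
        # Stop and start the CIFS service so that a change to admin.shareC_NotCreated takes effect
        server_setup server_2 -Protocol cifs -option stop
        server_setup server_2 -Protocol cifs -option start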

    Block import

    Unisphere allows you to run multiple concurrent block import sessions for either LUNs or Consistency Groups (CGs) of LUNs from the VNX system to the Unity system. The number of concurrent sessions is based on the SAN Copy limits of the source VNX system and on the number of members in each CG. Block import uses the SAN Copy feature on the VNX storage system to push data to the Unity storage system. Use the VNX management IP address and VNX Admin credentials to configure a remote system connection from the target Unity system to the source VNX system. The VNX SAN Copy FC or iSCSI initiators are discovered through this connection, and the Unity system is registered as a SANCopy host. Also, all block resources that are eligible for import are discovered, which include:

    • Pool LUNs, Thin LUNs, and Meta LUNs that are:
      • Not reserved LUN pool LUNs
      • Not LUNs exposed to VNX file
      • Not System LUNs
    • CGs of LUNs

    Before you perform an import of a LUN or CG of LUNs, the reserved LUN pool (RLP) on the source VNX system should contain at least one free LUN for each LUN planned for import. Each reserved LUN can vary in size. However, using the same size for each LUN in the pool is easier to manage because the LUNs are assigned without regard to size. That is, the first available free LUN in the global reserved LUN pool is assigned. Since you cannot control which reserved LUNs are being used for a particular import session or a VNX process such as SnapView™, incremental SAN Copy, or MirrorView/A, use a standard size for all reserved LUNs. To assist in estimating a suitable reserved LUN pool size for the storage system, consider the following:

    • If you want to optimize space utilization, use the size of the smallest source LUN as the basis of the calculations. If you want to optimize the total number of source LUNs, use the size of the largest source LUN as the basis of the calculations.
    • If you have a standard online transaction processing (OLTP) configuration, use reserved LUNs sized at 10-20% of the source LUN size. This size tends to be appropriate to accommodate the copy-on-first-write activity.
    • If you are also using SnapView or MirrorView/A on the VNX LUN, then you may need additional RLP LUNs.
    If free reserved LUNs are not available in the RLP, the import session enters an error state. Add space to the RLP, after which the session can resume. The number of additional reserved LUNs does not have to match the number of source LUNs, because RLP LUNs are reused once an import session completes.
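    For example (an illustrative estimate only, assuming reserved LUNs sized at 10-20% of the source LUN size): to import ten source LUNs of 500 GB each from an OLTP workload, provision at least ten free reserved LUNs of roughly 50-100 GB each, for a total RLP of about 0.5-1 TB; heavier write activity during the import window calls for larger reserved LUNs.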

    For more information about the RLP and reserved LUNs, refer to the Unisphere online help on the source VNX.

    Block import prerequisites

    The preparation for block import (either LUNs or a CG of LUNs) differs from the preparation for file-based (VDM) import. Import of one or more LUNs or a CG of LUNs from a VNX storage system to a Unity storage system requires the following prerequisites:

    • SAN Copy is enabled on the VNX storage system
    • For FC-based import:
      • Port zoning is configured between the VNX and Unity storage systems.
    • For iSCSI-based import:
      • iSCSI interfaces are configured on both the VNX and Unity storage processors.
      • From the VNX storage system, iSCSI IP connections are configured between the source VNX SPs and the target Unity SPs as pairs (for example, VNX SPA is paired with Unity SPA and VNX SPB is paired with Unity SPB). Also, verify the connection configuration.
    • A reserved LUN pool (RLP) is configured with LUNs based on LUNs planned for the import. Refer to the existing VNX Unisphere online help for detailed information concerning RLP.
    • Hosts are configured on the Unity storage system to match the block hosts or storage groups on the source VNX storage system from which resources are imported. If needed, you can reconfigure host access on the Unity system.
    Do not use ports that are used by MirrorView for either FC-based or iSCSI-based import.

    Configure a VDM or block import

    To configure import for block or file storage resources, use the native Import feature in Unisphere. Complete the following steps:

    1. Configure the mobility (import) interfaces on each SP of the target system.
      Although these interfaces are needed only for importing a VDM and its related file systems, they must be configured if you use the same import connection for both file and block import sessions.
    2. Configure an import connection.
    3. Create an import session for the storage resource.
    The interfaces are required only for file import operations, not for block imports.

    About protection and mobility interfaces

    Protection and mobility interfaces are network interfaces that can be shared between replication-related and import data traffic, or used for management traffic through the virtual management port. Each storage processor (SP) must have one or more interfaces. Although import requires only one interface, the creation of an interface on both SPA and SPB is enforced.

    When you create a VDM import session, the VDM-to-Unity SP interfaces are paired up. Configure these interfaces before you create an associated connection. Interfaces are needed only for the import of a VDM and its file systems.

    The import connection cannot use the replication (MirrorView) interfaces on the VNX.

    From a replication perspective, if an interface is shared between replication and import, you must remove all import sessions before you can change the interface, and remove both replication and import sessions before you can delete the interface.

    About import connections

    Import requires a configured connection between the source system and target system. This connection is called an import connection. An import connection handles either a block import or a VDM and its file systems import.

    On the target Unity system, connections must be defined separately for replication and import, but interfaces can be shared between replication and import. Import connections are not verified until the session is created.

    For block, the initiators must be pushed from the VNX (VNX1 or VNX2) source system to the Unity target system.

    Before creating a mobility interface or import connection for a VNX import, configure the FC zoning when an FC connection is used for block import. If the connection uses iSCSI, an iSCSI connection is required between the source VNX system and the target Unity system.

    Once an import connection is created, the Unity target system automatically discovers both file and block resources on the source system.

    The SAN Copy Enabler must be installed on the VNX1 storage system so that block resources on the system can be discovered automatically. Otherwise, only the file resources are discovered when the import connection is created. VNX2 storage systems include the SAN Copy Enabler already installed.

    About import sessions

    An import session uses a configured import connection and associated interfaces to establish an end-to-end path for importing data between the source and target storage resources. The basic operations for an import session are:

    1. Create
    2. Resume or Pause
    3. Cutover
    4. Commit
    5. Cancel
    For a block import session, the commit operation is performed automatically at the end of cutover.

    You can cancel an import session in any state before the Commit state, except when the session is Canceling, Canceled, or Cutting Over. For a VDM (either NFS or CIFS) import session, the cancel operation rolls back the VDM and related file systems to the source VNX. The cancel operation also deletes the target file systems. If no file systems exist on the NAS server, the cancel operation deletes the NAS server. For a LUN or consistency group (CG) import session, the cancel operation deletes the SAN Copy session for each set of LUN pairs in the import session. It also disables SAN Copy access to the target LUNs and deletes the target LUNs or CG associated with the import session.

    You cannot upgrade a Unity system when an import session is in progress or create an import session when an upgrade session is in progress.