NFS-only VDM import details

    NFS-only VDM import workflow

    Most of the NFS-only VDM import operations are executed from the target Unity storage system. However, some initial setup operations, such as creating an import interface on the source VDM, must be executed on the source VNX system.

    Prerequisites for an NFS-only VDM import session

    Before starting an NFS-only VDM import session, the following conditions must be met:

    1. The source VNX system exists, and the VNX1 OE is 7.1.x or later or the VNX2 OE is 8.1.x or later.
    2. (Optional) The specified name for the import session is not already in use by another import session.
    3. The source VDM exists, and is in the loaded state.
    4. The source VDM is neither currently under import nor associated with a completed import.
    5. The source VDM is not configured with CIFS servers, secured NFS, or NFSv4.
    6. The maximum allowed deviation in time between the source side Data Mover (DM) that hosts the VDM and the target side SP that hosts the target NAS server is 5 seconds.
    7. Among all the up network interfaces attached to the source VDM, exactly one has a name starting with nas_migration_xx. This interface is used as the source import interface.
    8. Verify that the physical Data Mover on which the source VDM is located has at least one IP interface configured that is not attached to the VDM being migrated.
    9. One import interface of type replication exists on the default production port of the target SP, uses the same IP protocol (IPv4 or IPv6) and is in the same VLAN as the source import interface. The target import interface can reach the source import interface and access all the source base exports. This interface can be auto-detected, or specified as the target import interface.
    10. The default target production port exists, and supports the type file.
    11. All the specified target production ports in the interface-port pairs exist, support the type file, and are on the same SP of the default target production port.
    12. All the specified source production interfaces in the interface-port pairs exist, and are in the up state.
    13. The import interface on the source must be dedicated to the import only. Hosts must not use this interface for access; ensure that it is not used for any other purpose, for example, to export NFS exports or CIFS shares to hosts.
    14. All the specified source file systems in the file system-pool pairs exist, and are valid import candidates. (These import candidates are mounted on the source VDM and cannot be an NMFS, a replication destination, a root file system, a raw file system, or a non-imported FLR file system.)
    15. The target pool that is specified to create the target NAS server exists.
    16. All the specified target pools in the file system-pool pairs exist.
    17. All the specified source file systems in the target VMware data store candidate exist, and are valid import candidates.
    18. There is no active import session on the SP of the default target production port.
    19. When the import session is created, the total number of import candidate source file systems, counted across all active sessions, cannot exceed the supported limit of total file systems for active sessions.
    20. There is at least one valid source file system that is mounted on the source VDM.
    21. There is at least one valid source production interface, which is up, attached to the source VDM.
    22. The total number of valid source production interfaces cannot exceed the limit of network interfaces for each NAS server on the target Unity system.
    23. The specified file systems to be imported as a VMware NFS data store must have only one export, at the root directory.
    24. NFS exports do not contain unsupported characters, such as a comma (,) or double quote (").
    25. There are no NFS exports that were only temporarily unexported and whose paths no longer exist. To check for this kind of export, run the following command on the Control Station:

      nas_server -query:name==vdm147 -fields:exports -format:%q -query:IsShare==False -fields:Path,AlternateName,Options -format:"<Path>%s</Path> <AlternateName>%s</AlternateName> %s "

      Compare the resulting list with the output of server_export. If there are differences, you must delete the old entry from the vdm.cfg file of the VDM. Do the following:
      1. Login as root on the control station.
      2. Go to the root file system of the VDM (cd /nas/quota/slot_X/root_vdm_xx/.etc).
      3. Edit the vdm.cfg file with vi and remove the line corresponding to the NFS export that you want to clean up (for example, export "/fs4" anon=0).
      4. Check that the export does not appear any more in the nas_server -query. You do not need to reboot the VDM.
    26. If the source VNX system is configured with the code page 8859-1 or 8859-15 for the NFSv3 clients, ensure the code page for the Unity system matches the code page being used on the VNX system. With Unity OE 4.3 and later, the code page of the Unity system can be changed through the svc_nas {<NAS_server_name> | all} -param -facility vdm -modify codepage -value <value> service command.
    27. When performing a VDM import of FLR-enabled file systems, the source VNX Data Mover that is running the DHSM service should also be configured with username and password credentials.
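
    Several of these prerequisites can be spot-checked from the VNX Control Station before creating the session. The following is a minimal sketch, assuming an example VDM named vdm1 hosted on physical Data Mover server_2 (the names, and the exact output formats, vary by environment and VNX OE version):

      # nas_server -info -vdm vdm1                (confirm the VDM exists and is in the loaded state)
      # server_ifconfig server_2 -all             (confirm the nas_migration_xx interface is up, and that the Data Mover has an IP interface not attached to the VDM)
      # server_export vdm1 -Protocol nfs -list    (review the NFS exports to be imported)
      # server_date server_2                      (compare with the target SP time; skew must be within 5 seconds)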

    Change settings of an NFS-only VDM import

    You can change some import settings before the import session starts or while the session is in a failed Initial Copy status. The changeable parameters include:

    • Pools of the target file system
    • Pool of the target NAS server
    • Ports of the target production interfaces
    • Mobility interface for import
    • Name of import session
    The import session name is an exception: it is not limited to before the import session starts or to a failed Initial Copy status, and can be changed at any time.
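
    For example, renaming an import session from UEMCLI might look like the following. This is a sketch: the set action and -name option on /import/session/nas are assumptions here, so verify the exact syntax in the Unisphere Command Line Interface User Guide.

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 set -name vdm1_import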

    The following changes on the source system are discouraged during an import session:

    • Changes to Quota settings
    • NIS or LDAP configuration changes
    • DNS, gateway, or routing changes
    • Creating or deleting file systems
    • File system level FLR properties (on either source or target systems) or epoch year on source file systems
    • Retention settings of DHSM for the specific file systems
    If the source system is configured with auto-delete or auto-lock enabled, the target system does not enable the respective option until the import session is committed. Also, if expired files exist on the source FLR file systems, they remain expired after the file import. However, their expiration time is not preserved: after the file import, the expiration time is the time at which the file was imported.

    The target system cannot prevent these actions on the source system. However, these actions can result in the changes not being imported to the target system and causing the import session to fail.

    Start an NFS-only VDM import session

    The import session is automatically started after creation in the Unisphere UI.

    For UEMCLI or REST, the start operation shares the same command and operation as resume, to align with the behavior of block import and replication. You can only start an import session when it is in the Initialized state. If the import start fails, the import state is kept as Initial Copy with a Minor failure health state, and the health details are set to "The migration session failed to provision target resource." At that point, you can get detailed information from the job and tasks, fix the problem, and then resume the import session.

    The start of an import session is an asynchronous operation by default and always returns success after creating a backend job to run the initial copy. A pre-check is done before the import starts.
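
    For example, starting (or resuming) a session with ID import_1 from UEMCLI uses a single command for both operations. The object path and verb shown here are a sketch based on the shared start/resume behavior described above; confirm the syntax in the Unisphere Command Line Interface User Guide.

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 resume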

    In the event of an SP reboot, affected import sessions fail over to the peer SP. In the event of a system reboot, import sessions pause and automatically resume when the system returns.

    NFS-only VDM import initial copy

    After the start of the import, VDM import enters the initial copy state. The initial copy consists of three sequential stages:

    1. Target NAS server and file system (thin file systems) provisioning
    2. Initial data copy
    3. Configuration import
    Target provisioning

    The following is the sequence of provisioning of the NAS server and file systems (thin file systems) in the target system:

    1. The import session is validated to be initialized and all parameters are set correctly.
    2. The target NAS server is created in import target mode with the correct name.
    3. The target file system or file systems are created in import target mode, with names that exactly match the source mount paths.
    4. The quota information is dumped from the source file system or file systems to the matching target file system or file systems. If a file system is imported as a VMware NFS data store on Unity, the quota dump is skipped.
    5. A private server is set up for data transfer from the source NFS server to the target SP. The source VDM exports are updated to include the target import IP address at the file system root directory.
    6. The file system level import sessions between the source side file systems and the target side file systems are created and started.
    7. The auto-retry mechanism is started.
    Initial copy

    Once the import target NAS server and file systems are provisioned, the initial copy job creates and starts the file system level import sessions for the initial data copy between the source and target file systems. Initial copy does not enter the Configuration Import stage until all file systems are migrated.

    Configuration import

    The initial copy job task continuously runs while waiting for the completion of the baseline copy. Once the baseline copy is complete, it stops the auto-retry mechanism and starts the configuration import (configuration import task). The VDM configuration that is imported ensures that the NAS server on the target system works correctly. The VDM configuration includes:

    • NFS export
    • Network interface
    • Route configuration
    • IP reflect parameter
    • DNS configuration
    • Local file
    • LDAP configuration
    • NIS configuration
    • UNIX Directory Service setting
    • Quota configuration

    Once the configuration import completes, the initial copy job prepares for cutover to reduce the overall time and the DU (data unavailable) time for cutting over. After the configuration import completes, the file import session enters the Ready to Cutover state. If the import fails during initial copy, the configuration import is not rolled back. You can resume the import operation after fixing the reported issues, and the import continues from the point at which it failed.

    NFS-only VDM import cutover

    Before you run the cutover operation, ensure the following:

    • The source VDM has not been deleted or renamed before cutover.
    • The file system that is mounted on the source VDM has not been renamed, unmounted, or deleted.
    • The source VDM interface has not been deleted or renamed.
    • When migrating a VNX VDM that is using NIS, ensure that NIS connectivity is enabled before cutover.
      The Unity system firewall can block a NAS server from connecting to a NIS server, so it is highly recommended to enable NIS connectivity before cutover. If NIS is not enabled after cutover, a host application may not be able to access the NAS server successfully. In this case, follow the instructions below to resolve the access issue.

    To resolve the NAS server access issue due to disabled NIS connectivity, enable NIS. The following example demonstrates how to do this:

    1. Query the import session to identify the target NAS server (nas_6 in this example): # uemcli -d localhost -sslPolicy accept -noheader -u admin -p adminpassword /import/session/nas show

       1:  ID              = import_1
           Type            = CIFS
           Name            = import_sess_vdm1_APM00151909181_FNM00153800463
           Health state    = OK (5)
           State           = Completed
           Progress        =
           Source system   = RS_65538
           Source resource = vdm1
           Target resource = nas_6
           CIFS local user = cifsuser

    2. Query the NAS server network interface, for example, if_14: # uemcli -d localhost -sslPolicy accept -noheader -u admin -p adminpassword /net/nas/server -id nas_6 show

       1:  ID                            = nas_6
           Name                          = vdm1
           NetBIOS name                  =
           SP                            = spb
           Storage pool                  = pool_1
           Tenant                        =
           Interface                     = if_14
           NFS enabled                   = yes
           NFSv4 enabled                 = no
           CIFS enabled                  = no
           Multiprotocol sharing enabled = no
           Unix directory service        = localThenNis
           Health state                  = OK (5)

    3. Query the network interface related to the port, for example, eth2 and VLAN 404: # uemcli -d localhost -sslPolicy accept -noheader -u admin -p adminpassword /net/if -id if_14 show

       1:  ID               = if_14
           Type             = file
           NAS server       = nas_6
           Port             = spb_eth2
           VLAN ID          = 404
           IP address       = 10.109.104.133

    4. Add firewall rules for the NIS server connection.
      • Example for a VLAN-enabled NAS server network interface: svc_firewall -udp -add eth2.404 10.109.177.170 and svc_firewall -udp -add eth2.404 10.109.177.169
      • Example for a VLAN-disabled NAS server network interface: svc_firewall -udp -add eth10 1.2.3.4

    When the initial copy has completed, the file import session enters the Ready to Cutover state. You can switch the production VDM from source to target so that the target side NAS server becomes the production side with all data synchronized. The cutover should be transparent to the users. After cutover, the NFS host clients can access the new production side without requiring a remount.

    You can launch cutover from Unisphere, UEMCLI, or REST. Cutover starts a job, which does the following:

    1. Runs a pre-cutover validation check.
    2. Freezes the source network lock files for import. The job tries to get all the NLM data from the source VDM and import it to the target system.
    3. Freezes the source file systems (frozen file systems ignore NFS requests and NFSv3 lock requests, and deny NLM locks).
    4. Turns down the source client interface or interfaces (public IP interfaces).
    5. Unfreezes the source file systems.
    6. Turns up the target interface or interfaces (public IP interfaces).
    7. Reclaims network lock files on the target system. The job tries to revive all imported lock files on the target system.
    8. Starts an incremental copy.
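
    As a sketch, launching cutover for session import_1 from UEMCLI might look like the following; the cutover verb and object path are assumptions, so confirm them in the Unisphere Command Line Interface User Guide.

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 cutover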

    NFS-only VDM import incremental copy

    Incremental copy starts after the cutover to the target storage system. It synchronizes any data updates made on the source between the start of the initial data copy and cutover. During the incremental copy, all data writes to the target storage system are also synchronized back to the source, to guarantee that the data is identical between the source VDM and the target NAS server. Pause and resume operations are supported during the incremental copy.

    The synchronization of data changes back to the source storage system cannot be paused.

    During incremental copy, quota import disables the online quota check. The check is resumed during the import commit.

    NFS-only VDM import commit

    When all data is synchronized between the source VDM and target NAS server, the import session enters the Ready to commit state. You can complete the import through Unisphere or by running the commit command in UEMCLI or REST.

    After the commit operation completes, new data updates on the production (target) NAS server are no longer synchronized back to the source VDM. All import-specific resources, such as the NAS server, file systems, and production interfaces, are cleaned up on the target system. The exceptions are the import session information and the summary report; that data is removed when the import-related source storage system is deleted from the target storage system. Because the source VDM is obsolete, the temporary import changes on the source VDM are not cleaned up during the import commit. You cannot cancel the import session and roll back the imported NAS server to the source VDM after the import session is committed.
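
    As a sketch, committing session import_1 from UEMCLI might look like the following (the commit verb and object path are assumptions; confirm the syntax in the Unisphere Command Line Interface User Guide):

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 commit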

    NFS-only VDM import pause

    You can pause an import session during the Initial Copy state (internally provisioning target, initial copy, or import configuration) and the Incremental Copy state through Unisphere, UEMCLI, or REST. This operation is useful when the network load is too heavy. If the import session is in the Initial Copy state, the pause operation fails the job executing the Initial Copy. When the import session is paused, the session state remains unchanged but the health state is not OK. A paused import session can be resumed.

    NFS-only VDM import resume

    You can resume a paused import session through Unisphere, UEMCLI, or REST. The Resume and Start operations share the same command (Start effectively resumes an initialized session). Similar to the Pause operation, the Resume command returns immediately, and the file system level import sessions then resume one by one internally. The whole import session returns to the running state when all the underlying file system import sessions have resumed; it may take a while for the import session health state to change to the expected OK state. Use the Resume operation to restart the data transfer or configuration import after an import session failure, once the reason for the failure has been fixed.
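
    As a sketch covering both of the operations above, pausing and later resuming session import_1 from UEMCLI might look like the following (the pause and resume verbs and the object path are assumptions; confirm the syntax in the Unisphere Command Line Interface User Guide):

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 pause
      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 resume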

    NFS-only VDM import cancel

    At any time during the import, except during the cutting over and committing phases, you can cancel an ongoing import session. Depending on which state the import session is in, canceling the import session has different effects:

    • Before import start, the import session is deleted.
    • After the import starts and before cutover:
      • Stops the data copy
      • Cleans up the copied data and imported configuration data
      • Cleans up the migrating NAS server and file systems, except the user-created file systems
    • After cutover and before committing:
      • Stops the data copy and data synchronization
      • Rolls back to the source VDM
      • Cleans up the copied data and imported configuration data
      • Cleans up the importing NAS server and file systems, except the user-created file systems

    Hosts that are created for NFS exports during the import are not cleaned up. If the user created file systems after the import cutover, those file systems and the target NAS server are kept, while the target production interfaces are removed. If the NAS server is kept because of user-created file systems, the imported configuration (UNIX Directory Service, DNS) is kept as well.
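
    As a sketch, canceling session import_1 from UEMCLI might look like the following (the cancel verb and object path are assumptions; confirm the syntax in the Unisphere Command Line Interface User Guide):

      # uemcli -d localhost -u admin -p adminpassword /import/session/nas -id import_1 cancel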

    View NFS-only VDM import information

    You can view VDM import session information from Unisphere, UEMCLI, or REST. After an import session is created, you can query the progress of the session from UEMCLI or REST. However, this property is only valid when the session is in the Initial Copy phase or Incremental Copy phase:

    • When the VDM import session is in the Initial Copy phase, the progress reflects the progress of the whole initial copy, including initial data copy, configuration import, and quota configuration import.
    • When the VDM import session is in the Incremental Copy phase, the progress just reflects the progress of the incremental data copy.
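
    For example, the show command already used in the NIS example earlier lists all NAS import sessions and their State fields; adding -detail (a standard UEMCLI qualifier, assumed here to include additional attributes such as Progress) gives a fuller view:

      # uemcli -d localhost -sslPolicy accept -noheader -u admin -p adminpassword /import/session/nas show -detail
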
    Import Summary Report

    The Import Summary Report provides information about the import session. The report can be downloaded during any of the various stages of the import process (for example, the Initial Copy, Syncing, Paused, Ready to Cutover, Ready to Commit, Canceled, or Completed states). This report can be helpful when reviewing or troubleshooting import session progress. In Unisphere, after creating an import session, go to More Actions > Download Summary Report, which produces a ZIP file that can be downloaded to the host system. The most relevant file in the download is SummaryReport.html.

    File system mount options mapping

    Many of the mount options for VNX storage systems are not supported on Unity storage systems. Table 1 maps the VNX mount options that Unity supports.

    Table 1. Mount option mapping between VNX and Unity

    VNX         | Unity       | Comment
    mover_name  | vdm         |
    fs_name     | description |
    mount_point | name        |
    Ro          | rw          | Unity always uses rw. See Ro mount option special handling.
    Rw          | rw          | Unity always uses rw.

    Ro mount option special handling

    For Unity systems with OE version 4.3.x or earlier, if a file system is mounted on a VNX with the Ro option, the NFS exports that are created from this file system are actually read-only to their clients, even though the clients may be granted read/write or root privileges. After import, the NFS shares that are imported to the Unity system are ro-exported. If the default access is rw or root, it is degraded to ro (read-only). If the default access is na (no access), it remains unchanged. All rw or root host entries that are configured on these NFS shares are degraded to roHosts so that the NFS shares remain read-only to their clients.

    For example, if the source VNX export option is access=<IP>, only this IP has access permission; other clients cannot access the export. The NFS share configuration on Unity would be: defAccess=na, roHost=<IP>.

    If the source VNX export option is root=<IP>, the IP has root permission and other clients have rw permission to this export. The NFS share configuration on Unity would be: defAccess=ro, roHost=<IP>.
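
    Expressed as the original VNX export commands (vdm1 and /fs1 are example names), the two cases above look like this, together with the resulting Unity share settings:

      # server_export vdm1 -Protocol nfs -option access=10.1.1.1 /fs1
        (imported Unity NFS share: defAccess=na, roHost=10.1.1.1)
      # server_export vdm1 -Protocol nfs -option root=10.1.1.1 /fs1
        (imported Unity NFS share: defAccess=ro, roHost=10.1.1.1)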

    For Unity systems with OE version 4.4.x or later, the following enhancements have been incorporated:

    • Another type of host access (Read only, allow Root) of an NFS share has been defined that does not require a host object. However, NFS shares still support host access by registered hosts. Read only, allow Root access means hosts have permission to view the contents of the share, but not to write to it. The root of the NFS client has root access to the share.
    • Clients can be a hostname, netgroup, subnet, a DNS domain, or IP address and must be comma-separated, without spaces. No host object is involved in this simplified host definition.
    • NFS share objects can have either five host lists of registered hosts or five host lists as strings. A new attribute named advHostMgmtEnabled has been added that indicates whether host lists are configured as strings or by specifying the IDs of registered hosts. For the same NFS share, you can create a host list either by using a string or by selecting registered hosts; you cannot use both methods. When creating a new NFS share through the CLI or from Unisphere (in a regular context), the default is to configure host lists using the IDs of registered hosts. When importing an NFS VDM from a VNX system, the imported NFS shares are created on the Unity system with host lists configured as strings (because host lists on VNX are strings).
    For more information about settings for NFS shares, refer to the Unisphere Command Line Interface User Guide, Unity Unisphere Online Help, and Unity Service Commands Technical Notes.

    NFS export options mapping

    Some of the NFS export options for VNX storage systems are not supported on Unity storage systems. Table 2 maps the VNX NFS export options that Unity supports.

    Table 2. NFS export option mapping between VNX and Unity

    VNX (server_export -option)   | Description                                                                                                          | Unity (/stor/prov/fs/nfs)
    sec=sec                       | AUTH_SYS (default)                                                                                                   | -minSecurity sys
    ro                            | Exports the <pathname> for all NFS clients as read-only.                                                             | -defAccess ro
    ro=<client>[:<client>]...     | Exports the <pathname> for the specified NFS clients as read-only.                                                   | -roHosts
    rw=<client>[:<client>]...     | Exports the <pathname> as read-mostly for the specified NFS clients.                                                 | -rwHosts && -defAccess ro
    root=<client>[:<client>]...   | Provides root privileges for the specified NFS clients.                                                              | -rootHosts or -roRootHosts
    anon=<uid>                    | If the NFS request comes from the root (uid=0) on the host and the host access is ro or rw, the anonUID is used on the NFS server as the effective user ID. | -anonUID
    access=<client>[:<client>]... | Provides mount access for the specified NFS clients.                                                                 | -roHosts, -roRootHosts, -rwHosts
    access=<-client>[:<-client>]  | Excludes the specified NFS clients from access even if they are part of a subnet or netgroup that is allowed access. | No access
    ro=<-client>[:<-client>]      | Excludes the specified NFS clients from ro privileges.                                                               | No access
    rw=<-client>[:<-client>]      | Excludes the specified NFS clients from rw privileges.                                                               | No access
    root=<-client>[:<-client>]    | Excludes the specified NFS clients from root privileges.                                                             | No access
    -comment                      | Comment for the specified NFS export entry.                                                                          | -descr
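
    As a sketch of how the mappings in Table 2 compose, consider a VNX export that combines several options (vdm1, /fs1, and the addresses are example values):

      # server_export vdm1 -Protocol nfs -option rw=10.1.1.1,root=10.1.1.2,anon=0 /fs1

    Per Table 2, the imported Unity NFS share would carry -rwHosts 10.1.1.1 with -defAccess ro, -rootHosts (or -roRootHosts) 10.1.1.2, and -anonUID 0.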