    Manage Storage

    Configure custom pools

    Pools are the groups of drives on which you create storage resources. Configure pools based on the type of storage resource and usage that will be associated with the pool, such as file system storage optimized for database usage. The storage characteristics differ according to the following:

    • Type of drive used to provide the storage.
    • (dual-SP virtual deployments only) RAID level implemented for the storage.
    Note:  Before you create storage resources, you must configure at least one pool.

    The following table lists the attributes for pools:

    Table 1. Custom pool attributes
    Attribute
    Description
    ID
    ID of the pool.
    Name
    Name of the pool.
    Type
    Pool type. Valid values are:
    • Dynamic
    • Traditional
    Description
    Brief description of the pool.
    Total space
    Total storage capacity of the pool.
    Current allocation
    Amount of storage in the pool allocated to storage resources.
    Preallocated space
    Amount of storage space reserved in the pool by storage resources for future needs to make writes more efficient. The pool may be able to reclaim some of this space if total pool space is running low. This value equals the sum of the sizePreallocated values of each storage resource in the pool.
    Remaining space
    Amount of storage in the pool not allocated to storage resources.
    Subscription
    For thin provisioning, the total storage space subscribed to the pool. All pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, pools can be over-provisioned to support more storage capacity than they actually possess.
    Note:   The system automatically generates an alert when total pool usage reaches 85% of the pool's physical capacity. The -alertThreshold qualifier specifies this alert threshold.
    Subscription percent
    For thin provisioning, the percentage of the total space in the pool that is subscription storage space.
    Alert threshold
    Threshold for the system to send an alert when hosts have consumed a specific percentage of the subscription space. Value range is 50 to 85.
    Drives
    List of the types of drives on the system, including the number of drives of each type, in the pool. If FAST VP is installed, you can mix different types of drives to make a tiered pool. However, SAS Flash 4 drives must be used in a homogeneous pool.
    Number of drives
    Total number of drives in the pool.
    Number of unused drives
    Number of drives in the pool that are not being used.
    RAID level (physical deployments only)
    RAID level of the drives in the pool.
    Stripe length (physical deployments only)
    Number of drives the data is striped across.
    Rebalancing
    Indicates whether a pool rebalancing is in progress. Valid values are:
    • yes
    • no
    Rebalancing progress
    Indicates the progress of the pool rebalancing as a percentage.
    System defined pool
    Indication of whether the system configured the pool automatically. Valid values are:
    • yes
    • no
    Health state
    Health state of the pool. The health state code appears in parentheses. Valid values are:
    • Unknown (0) - Health is unknown.
    • OK (5) - Operating normally.
    • OK BUT (7) - Pool has exceeded its user-specified threshold or the system-specified threshold of 85%.
    • Degraded/Warning (10) - Pool is operating, but degraded due to one or more of the following:
      • Pool has exceeded the user-specified threshold.
      • Pool is nearing capacity.
      • Pool is almost full.
      • Pool performance has degraded.
    • Major failure (20) - Dirty cache has made the pool unavailable.
    • Critical failure (25) - Pool is full. To avoid data loss, add more storage to the pool, or create more pools.
    • Non-recoverable error (30) - Two or more drives in the pool have failed, possibly resulting in data loss.
    Health details
    Additional health information. See Appendix A, Reference, for health information details.
    FAST Cache enabled (physical deployments only)
    Indicates whether FAST Cache is enabled on the pool. Valid values are:
    • yes
    • no
    Non-base size used
    Quantity of storage used for thin clone and snapshot data.
    Auto-delete state
    Indicates the state of an auto-delete operation on the pool. Valid values are:
    • Idle
    • Running
    • Could not reach LWM
    • Could not reach HWM
      Note:   If the auto-delete operation cannot satisfy the high water mark, and there are snapshots in the pool, the auto-delete operation sets the auto-delete state for that watermark to Could not reach HWM, and generates an alert.
    • Failed
    Auto-delete paused
    Indicates whether an auto-delete operation is paused. Valid values are:
    • yes
    • no
    Auto-delete pool full threshold enabled
    Indicates whether the system will check the pool full high water mark for auto-delete. Valid values are:
    • yes
    • no
    Auto-delete pool full high water mark
    The pool full high watermark on the pool.
    Auto-delete pool full low water mark
    The pool full low watermark on the pool.
    Auto-delete snapshot space used threshold enabled
    Indicates whether the system will check the snapshot space used high water mark for auto-delete. Valid values are:
    • yes
    • no
    Auto-delete snapshot space used high water mark
    High watermark for snapshot space used on the pool.
    Auto-delete snapshot space used low water mark
    Low watermark for snapshot space used on the pool.
    Data Reduction space saved (physical deployments only)
    Storage size saved on the pool by using data reduction.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction percent (physical deployments only)
    Storage percentage saved on the pool by using data reduction.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction ratio (physical deployments only)
    Ratio between data without data reduction and data after data reduction savings.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    All flash pool
    Indicates whether the pool contains only Flash drives. Valid values are:
    • yes
    • no
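
    The over-provisioning arithmetic behind the Subscription and Subscription percent attributes can be sketched as follows. This is an illustrative sketch only; the helper names are hypothetical and not part of uemcli, and the byte values mirror the sample pool output shown later in this section.

```python
# Illustrative sketch of the "Subscription percent" arithmetic described above;
# the function names are hypothetical, not part of uemcli.

def subscription_percent(subscribed_bytes: int, total_bytes: int) -> int:
    """Percentage of total pool space subscribed by storage resources."""
    return round(subscribed_bytes * 100 / total_bytes)

def is_overprovisioned(subscribed_bytes: int, total_bytes: int) -> bool:
    """True when thin resources subscribe to more space than the pool possesses."""
    return subscribed_bytes > total_bytes

# A pool with 4.5 TiB of total space and 10 TiB subscribed is over-provisioned:
total = 4947802324992         # 4.5 TiB
subscribed = 10995116277760   # 10 TiB
print(subscription_percent(subscribed, total))  # 222
print(is_overprovisioned(subscribed, total))    # True
```

    A subscription percent above 100% is not itself an error condition; it simply means the pool relies on thin resources never fully consuming their subscribed size at once.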

    Create pools

    Create a dynamic or traditional pool:

    • Both traditional pools and dynamic pools are supported in the CLI and REST API for Unity All-Flash models running OE version 4.2.x or later. The default pool type is dynamic.
    • Traditional pools are supported in all Unity hybrid and virtual models. They are also supported in Unity All-Flash models running OE version 4.1.x or earlier.
    Format
    /stor/config/pool create [-async] -name <value> [-type {dynamic | traditional}] [-descr <value>] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value>} [-alertThreshold <value>] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>]
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Note:  Simultaneous commands, asynchronous or synchronous, may fail if they conflict in trying to manage the same system elements.
    -name
    Type a name for the pool.
    -type
    (Available only for systems that support dynamic pools) Specify the type of pool to create. Value is one of the following:
    • dynamic
    • traditional

    Default value is dynamic.

    -descr
    Type a brief description of the pool.
    -storProfile (physical deployments only)
    Type the IDs of the storage profiles, separated by commas, to apply to the pool, based on the type of storage resource that will use the pool and the intended usage of the pool. View storage profiles (physical deployments only) explains how to view the IDs of available storage profiles on the system. If this option is not specified, a default RAID configuration is selected for each drive type in the selected drive group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5).
    -diskGroup (physical deployments only)
    Type a comma-separated list of IDs of the drive groups to use in the pool. Specifying drive groups with different drive types causes the creation of a multi-tier pool. View drive groups explains how to view the IDs of the drive groups on the system.
    -drivesNumber (physical deployments only)
    Specify the number of drives from each of the selected drive groups, separated by commas, to use in the pool. If this option is specified without -storProfile, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected drive group.
    -disk (virtual deployments only)
    Specify the list of drive IDs, separated by commas, to use in the pool. Specified drives must be reliable storage objects that do not require additional protection.
    -alertThreshold
    For thin provisioning, specify the threshold, as a percentage, when the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 85%.
    -FASTCacheEnabled (physical deployments only)
    Specify whether to enable FAST Cache on the pool. Value is one of the following:
    • yes
    • no
    Default value is yes.
    -snapPoolFullThresholdEnabled
    Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following:
    • yes
    • no
    Default value is yes.
    -snapPoolFullHWM
    Specify the pool full high watermark for the pool. Valid values are 1-99. Default value is 95.
    -snapPoolFullLWM
    Specify the pool full low watermark for the pool. Valid values are 0-98. Default value is 85.
    -snapSpaceUsedThresholdEnabled
    Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following:
    • yes
    • no
    Default value is yes.
    -snapSpaceUsedHWM
    Specify the snapshot space used high watermark to trigger auto-delete on the pool. Valid values are 1-99. Default value is 95.
    -snapSpaceUsedLWM
    Specify the snapshot space used low watermark to trigger auto-delete on the pool. Valid values are 0-98. Default value is 20.
    Note:  Use the Change disk settings (virtual deployments only) command to change the assigned tiers for specific drives.
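
    The snapshot auto-delete qualifiers above come in high/low watermark pairs with documented ranges (HWM 1-99, LWM 0-98). A minimal pre-flight check along these lines can catch invalid values before invoking the CLI; requiring the low watermark to sit below its paired high watermark is an assumption based on the documented defaults, not a documented CLI rule.

```python
# Illustrative pre-check for the snapshot auto-delete watermark qualifiers;
# not part of uemcli. The ranges (HWM 1-99, LWM 0-98) come from the table
# above; requiring LWM < HWM is an assumption based on the defaults.

def validate_watermarks(hwm: int, lwm: int) -> None:
    if not 1 <= hwm <= 99:
        raise ValueError(f"high watermark {hwm} outside 1-99")
    if not 0 <= lwm <= 98:
        raise ValueError(f"low watermark {lwm} outside 0-98")
    if lwm >= hwm:
        raise ValueError(f"low watermark {lwm} not below high watermark {hwm}")

# The documented defaults pass:
validate_watermarks(95, 85)  # -snapPoolFullHWM / -snapPoolFullLWM
validate_watermarks(95, 20)  # -snapSpaceUsedHWM / -snapSpaceUsedLWM
```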
    Example 1 (physical deployments only)

    The following command creates a dynamic pool. This example uses storage profiles profile_1 and profile_2, six drives from drive group dg_2, and ten drives from drive group dg_28. The configured pool receives ID pool_2.

    Note:  Before using the /stor/config/pool create command, use the /stor/config/profile show command to display the dynamic pool profiles and the /stor/config/dg show command to display the drive groups.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "dynamic pool" -diskGroup dg_2,dg_28 -drivesNumber 6,10 -storProfile profile_1,profile_2
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_2
    Operation completed successfully.
    Example 2 (physical deployments only)

    The following command creates a traditional pool in models that support dynamic pools. This example uses storage profiles tprofile_1 and tprofile_2, five drives from drive group dg_3, and nine drives from drive group dg_28. The configured pool receives ID pool_6.

    Note:  Before using the /stor/config/pool create command, use the /stor/config/profile -traditional show command to display the traditional pool profiles (which start with "t") and the /stor/config/dg show command to display the drive groups.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "traditional pool" -diskGroup dg_3,dg_28 -drivesNumber 5,9 -storProfile tprofile_1,tprofile_2 -type traditional
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_6
    Operation completed successfully.
                            
    Example 3 (physical deployments only)

    The following command creates a traditional pool in models that do not support dynamic pools. This example uses storage profiles profile_19 and profile_20, five drives from drive group dg_15, and nine drives from drive group dg_16. The configured pool receives ID pool_5.

    Note:  Before using the /stor/config/pool create command, use the /stor/config/profile show command to display the traditional pool profiles and the /stor/config/dg show command to display the drive groups.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "my big pool" -storProfile profile_19,profile_20 -diskGroup dg_15,dg_16 -drivesNumber 5,9 -FASTCacheEnabled yes
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_5
    Operation completed successfully.
                            
    Example 4 (virtual deployments only)

    The following command creates a traditional pool with two virtual disks, vdisk_0 and vdisk_2, in the Extreme Performance tier. The configured pool receives ID pool_4.

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name vPool -descr "my virtual pool" -disk vdisk_0,vdisk_2
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_4
    Operation completed successfully.
                            

    Change pool settings

    Change the subscription alert threshold, FAST Cache, and snapshot threshold settings for a pool.

    Format
    /stor/config/pool {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-alertThreshold <value>] [-FASTCacheEnabled {yes|no}] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>] [-snapAutoDeletePaused no]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the pool to change.
    -name
    Type the name of the pool to change.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Note:  Simultaneous commands, asynchronous or synchronous, may fail if they conflict in trying to manage the same system elements.
    -name
    Type a name for the pool.
    -descr
    Type a brief description of the pool.
    -alertThreshold
    For thin provisioning, specify the threshold, as a percentage, when the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 84%.
    -FASTCacheEnabled (physical deployments only)
    Specify whether to enable FAST Cache on the pool. Value is one of the following:
    • yes
    • no
    -snapPoolFullThresholdEnabled
    Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following:
    • yes
    • no
    -snapPoolFullHWM
    Specify the pool full high watermark for the pool. Valid values are 1-99. Default value is 95.
    -snapPoolFullLWM
    Specify the pool full low watermark for the pool. Valid values are 0-98. Default value is 85.
    -snapSpaceUsedThresholdEnabled
    Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following:
    • yes
    • no
    -snapSpaceUsedHWM
    Specify the snapshot space used high watermark to trigger auto-delete on the pool. Valid values are 1-99. Default value is 95.
    -snapSpaceUsedLWM
    Specify the snapshot space used low watermark to trigger auto-delete on the pool. Valid values are 0-98. Default value is 20.
    -snapAutoDeletePaused
    Specify whether to pause snapshot auto-delete. Typing no resumes the auto-delete operation.
    Example

    The following command sets the subscription alert threshold for pool pool_1 to 70% and disables FAST Cache:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 set -alertThreshold 70 -FASTCacheEnabled no
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_1
    Operation completed successfully.

    Add drives to pools

    Add new drives to a pool to increase its storage capacity.

    Format
    /stor/config/pool {-id <value> | -name <value>} extend [-async] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value>}
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the pool to extend.
    -name
    Type the name of the pool to extend.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -diskGroup (physical deployments only)
    Type the IDs of the drive groups, separated by commas, to add to the pool.
    -drivesNumber (physical deployments only)
    Type the number of drives from the specified drive groups, separated by commas, to add to the pool. If this option is specified when -storProfile is not specified, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected drive group.
    -storProfile (physical deployments only)
    Type the IDs of the storage profiles, separated by commas, to apply to the pool. If this option is not specified, a default RAID configuration is selected for each particular drive type in the selected drive group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5).
    -disk (virtual deployments only)
    Specify the list of drives, separated by commas, to add to the pool. Specified drives must be reliable storage objects that do not require additional protection.
    Example 1 (physical deployments only)

    The following command extends pool pool_1 with seven drives from drive group dg_1, using storage profile profile_12:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 extend -diskGroup dg_1 -drivesNumber 7 -storProfile profile_12
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = pool_1
    Operation completed successfully.
    Example 2 (virtual deployments only)

    The following command extends pool pool_1 by adding two virtual disks, vdisk_1 and vdisk_5.

    uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 extend -disk vdisk_1,vdisk_5
    Storage system address: 10.0.0.2
    Storage system port: 443
    HTTPS connection
    
    ID = pool_1
    Operation completed successfully.

    View pools

    View a list of pools. You can filter on the pool ID or name.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/config/pool [{-id <value> | -name <value>}] show
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a pool.
    -name
    Type the name of a pool.
    Example 1 (physical deployments only)

    The following command shows details about all pools on a hybrid system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    
     1:     ID                                               = pool_1
           Name                                              = Performance
           Description                                       = Multi-tier pool
           Total space                                       = 8663754342400 (7.8T)
           Current allocation                                = 0
           Preallocated space                                = 38310387712 (35.6G)
           Remaining space                                   = 8663754342400 (7.8T)
           Subscription                                      = 0
           Subscription percent                              = 0%
           Alert threshold                                   = 70%
           Drives                                            = 5 x 600.0G SAS; 5 x 1.6T SAS Flash 3
           Number of drives                                  = 10
           RAID level                                        = 5
           Stripe length                                     = 5
           Rebalancing                                       = no
           Rebalancing progress                              =
           Health state                                      = OK (5)
           Health details                                    = "The component is operating normally. No action is required."
           FAST Cache enabled                                = no
           Protection size used                              = 0
           Non-base size used                                = 0
           Auto-delete state                                 = Idle
           Auto-delete paused                                = no
           Auto-delete pool full threshold enabled           = yes
           Auto-delete pool full high water mark             = 95%
           Auto-delete pool full low water mark              = 85%
           Auto-delete snapshot space used threshold enabled = no
           Auto-delete snapshot space used high water mark   = 25%
           Auto-delete snapshot space used low water mark    = 20%
           Compression space saved                           = 0
           Compression Percent                               = 0%
           Compression Ratio                                 = 1:1
           Data Reduction space saved                        = 0
           Data Reduction percent                            = 0%
           Data Reduction ratio                              = 1:1
           All flash pool                                    = no
    
    2:     ID                                                = pool_2
           Name                                              = Capacity
           Description                                       =
           Total space                                       = 4947802324992 (4.5T)
           Current allocation                                = 3298534883328 (3T)
           Preallocated space                                = 22194823168 (20.6G)
           Remaining space                                   = 1649267441664 (1.5T)
           Subscription                                      = 10995116277760 (10T)
           Subscription percent                              = 222%
           Alert threshold                                   = 70%
           Drives                                            = 12 x 2TB NL-SAS
           Number of drives                                  = 12
           Unused drives                                     = 7
           RAID level                                        = 6
           Stripe length                                     = 6       
           Rebalancing                                       = yes
           Rebalancing progress                              = 46%
           Health state                                      = OK (5)
           Health details                                    = "The component is operating normally. No action is required."
           FAST Cache enabled                                = yes
           Protection size used                              = 10995116238 (10G)
           Non-base size used                                = 10995116238 (10G)
           Auto-delete state                                 = Running
           Auto-delete paused                                = no
           Auto-delete pool full threshold enabled           = yes
           Auto-delete pool full high water mark             = 95%
           Auto-delete pool full low water mark              = 85%
           Auto-delete snapshot space used threshold enabled = yes
           Auto-delete snapshot space used high water mark   = 25%
           Auto-delete snapshot space used low water mark    = 20%
           Compression space saved                           = 1649267441664 (1.5T)
           Compression percent                               = 23%
           Compression ratio                                 = 1.3:1
           Data Reduction space saved                        = 1649267441664 (1.5T)
           Data Reduction percent                            = 23%
           Data Reduction ratio                              = 1.3:1
           All flash pool                                    = no
    
     3:    ID                                                = pool_3
           Name                                              = Extreme Performance
           Description                                       =
           Total space                                       = 14177955479552 (12.8T)
           Current allocation                                = 0
           Preallocated space                                = 14177955479552 (12.8T)
           Remaining space                                   = 14177955479552 (12.8T)
           Subscription                                      = 0
           Subscription percent                              = 0%
           Alert threshold                                   = 70%
           Drives                                            = 9 x 1.6T SAS Flash 3; 5 x 400.0G SAS Flash 2
           Number of drives                                  = 14
           RAID level                                        = 5
           Stripe length                                     = Mixed
           Rebalancing                                       = no
           Rebalancing progress                              =
           Health state                                      = OK (5)
           Health details                                    = "The component is operating normally. No action is required."
           FAST Cache enabled                                = no
           Protection size used                              = 0
           Non-base size used                                = 0
           Auto-delete state                                 = Idle
           Auto-delete paused                                = no
           Auto-delete pool full threshold enabled           = yes
           Auto-delete pool full high water mark             = 95%
           Auto-delete pool full low water mark              = 85%
           Auto-delete snapshot space used threshold enabled = no
           Auto-delete snapshot space used high water mark   = 25%
           Auto-delete snapshot space used low water mark    = 20%
           Compression space saved                           = 0
           Compression Percent                               = 0%
           Compression Ratio                                 = 1:1
           Data Reduction space saved                        = 0
           Data Reduction percent                            = 0%
           Data Reduction ratio                              = 1:1
           All flash pool                                    = yes
                            
    Example 2

    The following example shows all pools for a model that supports dynamic pools.

    uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
    Storage system address: 10.0.0.2
    Storage system port: 443
    HTTPS connection
    1:    ID                                                = pool_3
          Type                                              = Traditional
          Name                                              = MyPool
          Description                                       = traditional pool
          Total space                                       = 14177955479552 (12.8T)
          Current allocation                                = 0
          Preallocated space                                = 38310387712 (35.6G)
          Remaining space                                   = 14177955479552 (12.8T)
          Subscription                                      = 0
          Subscription percent                              = 0%
          Alert threshold                                   = 70%
          Drives                                            = 9 x 1.6T SAS Flash 3; 5 x 400.0G SAS Flash 2
          Number of drives                                  = 14
          RAID level                                        = 5
          Stripe length                                     = Mixed
          Rebalancing                                       = no
          Rebalancing progress                              =
          Health state                                      = OK (5)
          Health details                                    = "The component is operating normally. No action is required."
          FAST Cache enabled                                = no
          Protection size used                              = 0
          Non-base size used                                = 0
          Auto-delete state                                 = Idle
          Auto-delete paused                                = no
          Auto-delete pool full threshold enabled           = yes
          Auto-delete pool full high water mark             = 95%
          Auto-delete pool full low water mark              = 85%
          Auto-delete snapshot space used threshold enabled = no
          Auto-delete snapshot space used high water mark   = 25%
          Auto-delete snapshot space used low water mark    = 20%
          Compression space saved                           = 0
          Compression Percent                               = 0%
          Compression Ratio                                 = 1:1
          Data Reduction space saved                        = 0
          Data Reduction percent                            = 0%
          Data Reduction ratio                              = 1:1
          All flash pool                                    = yes
    
    2:    ID                                                = pool_4
          Type                                              = Dynamic
          Name                                              = dynamicPool
          Description                                       =
          Total space                                       = 1544309178368 (1.4T)
          Current allocation                                = 0
          Preallocated space                                = 38310387712 (35.6G)
          Remaining space                                   = 1544309178368 (1.4T)
          Subscription                                      = 0
          Subscription percent                              = 0%
          Alert threshold                                   = 70%
          Drives                                            = 6 x 400.0G SAS Flash 2
          Number of drives                                  = 6
          RAID level                                        = 5
          Stripe length                                     = 5
          Rebalancing                                       = no
          Rebalancing progress                              =
          Health state                                      = OK (5)
          Health details                                    = "The component is operating normally. No action is required."
          Protection size used                              = 0
          Non-base size used                                = 0
          Auto-delete state                                 = Idle
          Auto-delete paused                                = no
          Auto-delete pool full threshold enabled           = yes
          Auto-delete pool full high water mark             = 95%
          Auto-delete pool full low water mark              = 85%
          Auto-delete snapshot space used threshold enabled = no
          Auto-delete snapshot space used high water mark   = 25%
          Auto-delete snapshot space used low water mark    = 20%
          Compression space saved                           = 0
          Compression Percent                               = 0%
          Compression Ratio                                 = 1:1
          Data Reduction space saved                        = 0
          Data Reduction percent                            = 0%
          Data Reduction ratio                              = 1:1
          All flash pool                                    = yes
                            
    Example 3 (virtual deployments only)

    The following command shows details for all pools on a virtual system.

    uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
                              Storage system address: 10.0.0.2
    Storage system port: 443
    HTTPS connection
    
    1:     ID                                                = pool_1
           Name                                              = Capacity
           Description                                       =
           Total space                                       = 4947802324992 (4.5T)
           Current allocation                                = 3298534883328 (3T)
           Preallocated space                                = 38310387712 (35.6G)
           Remaining space                                   = 1649267441664 (1.5T)
           Subscription                                      = 10995116277760 (10T)
           Subscription percent                              = 222%
           Alert threshold                                   = 70%
           Drives                                            = 1 x 120GB Virtual; 1 x 300GB Virtual
           Number of drives                                  = 2
           Health state                                      = OK (5)
           Health details                                    = "The component is operating normally.  No action is required."
           Non-base size used                                = 1099511625 (1G)
           Auto-delete state                                 = Running
           Auto-delete paused                                = no
           Auto-delete pool full threshold enabled           = yes
           Auto-delete pool full high water mark             = 95%
           Auto-delete pool full low water mark              = 85%
           Auto-delete snapshot space used threshold enabled = yes
           Auto-delete snapshot space used high water mark   = 25%
           Auto-delete snapshot space used low water mark    = 20%
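    The detail listings above follow a regular "Key = Value" layout, one numbered block per object, which makes them straightforward to post-process in a script. The following Python sketch is illustrative only (it is not part of uemcli) and assumes the listing has already been captured as text, for example via subprocess:

```python
import re

def parse_show_detail(text):
    """Parse uemcli 'show -detail' output into one dict per numbered record."""
    records = []
    current = None
    for line in text.splitlines():
        # A new record starts with "N:" followed by its first attribute.
        m = re.match(r"\s*\d+:\s+(.*)", line)
        if m:
            current = {}
            records.append(current)
            line = m.group(1)
        if current is None or "=" not in line:
            continue  # skip the connection banner and blank lines
        key, _, value = line.partition("=")
        current[key.strip()] = value.strip()
    return records

# A trimmed-down sample in the same layout as the output above.
sample = """1:    ID     = pool_1
      Name   = Capacity

2:    ID     = pool_4
      Name   = dynamicPool
"""
pools = parse_show_detail(sample)
```

    For the sample above, pools[0]["Name"] is "Capacity" and pools[1]["ID"] is "pool_4".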
                            

    Delete pools

    Delete a pool.

    Format
    /stor/config/pool {-id <value> | -name <value>} delete [-async]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the pool to delete.
    -name
    Type the name of the pool to delete.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Note:  Simultaneous commands, asynchronous or synchronous, may fail if they conflict in trying to manage the same system elements.
    Example

    The following command deletes pool pool_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 delete
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    Manage FAST VP pool settings

    Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a pool based on data access patterns.

    The following table lists the attributes for FAST VP pool settings.

    Table 2. FAST VP pool attributes
    Attribute
    Description
    Pool
    Identifies the pool.
    Status
    Identifies the status of data relocation on the pool. Value is one of the following:
    • Not started - Data relocation has not started.
    • Paused - Data relocation is paused.
    • Completed - Data relocation is complete.
    • Stopped by user - Data relocation was stopped by the user.
    • Active - Data relocation is in progress.
    • Failed - Data relocation failed.
    Relocation type
    Type of data relocation. Value is one of the following:
    • Manual - Data relocation was initiated by the user.
    • Scheduled or rebalancing - Data relocation was initiated by the system because it was scheduled, or because the system rebalanced the data.
    Schedule enabled
    Identifies whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
    • yes
    • no
    Start time
    Indicates the time the current data relocation started.
    End time
    Indicates the time the current data relocation is scheduled to end.
    Data relocated

    The amount of data relocated during an ongoing relocation, or the previous relocation if a data relocation is not occurring. The format is:

    <value> [suffix]

    where:
    • value - Identifies the size of the data relocated.
    • suffix - Identifies that the value relates to the previous relocation session.
    Rate
    Identifies the transfer rate for the data relocation. Value is one of the following:
    • Low - Least impact on system performance.
    • Medium - Moderate impact on system performance.
    • High - Most impact on system performance.
    Default value is medium.
    Note:  This field is blank if data relocation is not in progress.
    Data to move up
    The amount of data in the pool scheduled to be moved to a higher storage tier.
    Data to move down
    The amount of data in the pool scheduled to be moved to a lower storage tier.
    Data to move within
    The amount of data in the pool scheduled to be moved within the same storage tiers for rebalancing.
    Data to move up per tier

    The amount of data per tier that is scheduled to be moved to a higher tier. The format is:

    <tier_name>:[value]

    where:
    • tier_name - Identifies the storage tier.
    • value - Identifies the amount of data in that tier to be moved up.
    Data to move down per tier

    The amount of data per tier that is scheduled to be moved to a lower tier. The format is:

    <tier_name>:[value]

    where:
    • tier_name - Identifies the storage tier.
    • value - Identifies the amount of data in that tier to be moved down.
    Data to move within per tier

    The amount of data per tier that is scheduled to be moved within the same tier for rebalancing. The format is:

    <tier_name>:[value]

    where:
    • tier_name - Identifies the storage tier.
    • value - Identifies the amount of data in that tier to be rebalanced.
    Estimated relocation time
    Identifies the estimated time required to perform the next data relocation.
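    The three per-tier attributes above share the <tier_name>:[value] layout, comma-separated when several tiers are listed. A small Python sketch, illustrative only, that converts such a string into a map of tier name to byte count:

```python
import re

def parse_per_tier(value):
    """Parse e.g. 'Performance: 500182324992 (500G), Capacity: 1000114543245 (1.0T)'."""
    result = {}
    for part in value.split(","):
        # Tier names may contain spaces ("Extreme Performance"), so split on
        # the colon rather than on whitespace.
        tier, _, amount = part.partition(":")
        m = re.search(r"\d+", amount)  # the first number is the byte count
        if m:
            result[tier.strip()] = int(m.group())
    return result

moves = parse_per_tier(
    "Extreme Performance: 1000114543245 (1.0T), Performance: 500182324992 (500G)")
```

    For the string shown, moves maps "Extreme Performance" to 1000114543245 and "Performance" to 500182324992.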

    Change FAST VP pool settings

    Modify FAST VP settings on an existing pool.

    Format
    /stor/config/pool/fastvp {-pool <value> | -poolName <value>} set [-async] -schedEnabled {yes | no}
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of the pool.
    -poolName
    Type the name of the pool.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Note:  Simultaneous commands, asynchronous or synchronous, may fail if they conflict in trying to manage the same system elements.
    -schedEnabled
    Specify whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
    • yes
    • no
    Example

    The following example enables the rebalancing schedule on pool pool_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 set -schedEnabled yes
                                  Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Pool ID = pool_1
    Operation completed successfully.
                                

    View FAST VP pool settings

    View FAST VP settings on a pool.

    Format
    /stor/config/pool/fastvp [{-pool <value> | -poolName <value>}] show
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of the pool.
    -poolName
    Type the name of the pool.
    Example

    The following command lists the FAST VP settings on the storage system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp show -detail
                                  Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1: Pool                          = pool_1
       Relocation type               = manual
       Status                        = Active
       Schedule enabled              = no
       Start time                    = 2013-09-20 12:55:32
       End time                      = 2013-09-20 21:10:17
       Data relocated                = 100111454324 (100G)
       Rate                          = high
       Data to move up               = 4947802324992 (4.9T)
       Data to move down             = 4947802324992 (4.9T)
       Data to move within           = 4947802324992 (4.9T)
       Data to move up per tier      = Performance: 500182324992 (500G), Capacity: 1000114543245 (1.0T)
       Data to move down per tier    = Extreme Performance: 1000114543245 (1.0T), Performance: 500182324992 (500G)
       Data to move within per tier  = Extreme Performance: 500182324992 (500G), Performance: 500182324992 (500G), Capacity: 500182324992 (500G)
       Estimated relocation time     = 7h 30m
                                

    Start data relocation

    Start data relocation on a pool.

    Format
    /stor/config/pool/fastvp {-pool <value> | -poolName <value>} start [-async] [-rate {low | medium | high}] [-endTime <value>]
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of the pool on which to start data relocation.
    -poolName
    Type the name of the pool on which to start data relocation.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Note:  Simultaneous commands, asynchronous or synchronous, may fail if they conflict in trying to manage the same system elements.
    -endTime

    Specify the time to stop the data relocation. The format is:

    [HH:MM]

    where:
    • HH — Hour.
    • MM — Minute.
    Default value is eight hours from the current time.
    -rate
    Specify the transfer rate for the data relocation. Value is one of the following:
    • Low — Least impact on system performance.
    • Medium — Moderate impact on system performance.
    • High — Most impact on system performance.
    Default value is the value set at the system level.
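    Because -endTime is a clock time, the eight-hour default can wrap past midnight. A sketch of that computation in Python (illustrative only; the CLI derives this value itself):

```python
from datetime import datetime, timedelta

def default_end_time(now):
    """Eight hours from 'now', as HH:MM on a 24-hour clock (wraps past midnight)."""
    return (now + timedelta(hours=8)).strftime("%H:%M")

# Starting a relocation at 22:30 gives a default end time of 06:30 the next day.
```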
    Example

    The following command starts data relocation on pool pool_1, and directs it to end at 04:00:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 start -endTime 04:00
                                  Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                                

    Stop data relocation

    Stop data relocation on a pool.

    Format
    /stor/config/pool/fastvp {-pool <value> | -poolName <value>} stop [-async]
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of the pool.
    -poolName
    Type the name of the pool.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command stops data relocation on pool pool_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 stop
                                  Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                                

    Manage pool tiers

    Storage tiers allow users to move data between different types of drives in a pool to maximize storage efficiency. Storage tiers are defined by the following characteristics:

    • Drive performance.
    • Drive capacity.

    The following table lists the attributes for storage tiers:

    Table 3. Storage tier attributes
    Attribute
    Description
    Name
    Storage tier name.
    Drives
    The list of drive types, and the number of drives of each type in the storage tier.
    RAID level (physical deployments only)
    RAID level of the storage tier.
    Stripe length (physical deployments only)
    Comma-separated list of the stripe length of the drives in the storage tier.
    Total space
    Total capacity in the storage tier.
    Current allocation
    Currently allocated space.
    Remaining space
    Remaining space.

    View storage tiers

    View a list of storage tiers. You can filter on the pool ID.

    Note:  See the show action command for how to change the output format.
    Format
    /stor/config/pool/tier {-pool <value> | -poolName <value>} show
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of a pool.
    -poolName
    Type the name of a pool.
    Example 1 (physical deployments only)

    The following command shows tier details about the specified pool:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool pool_1 show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    Name                = Extreme Performance
          Drives              = 2 x 200.0G SAS Flash 2; 2 x 800.0G SAS Flash 2
          Drive type          = SAS Flash
          RAID level          = 10
          Stripe length       = 2
          Total space         = 868120264704 (808.5G)
          Current allocation  = 56371445760 (52.5G)
          Remaining space     = 811748818944 (756.0G)
    
    2:    Name                = Performance
          Drives              = 15 x 600.0G SAS
          Drive type          = SAS
          RAID level          = 5
          Stripe length       = 5
          Total space         = 7087501344768 (6.4T)
          Current allocation  = 0
          Remaining space     = 7087501344768 (6.4T)
    
    3:    Name                = Capacity
          Drives              = 8 x 6.0T NL-SAS
          Drive type          = NL-SAS
          RAID level          = 6
          Stripe length       = 8
          Total space         = 35447707271168 (32.2T)
          Current allocation  = 1610612736 (1.5G)
          Remaining space     = 35446096658432 (32.2T)
                            
    Example 2 (virtual deployments only)

    The following command shows details about pool pool_1 on a virtual system.

    uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool pool_1 show -detail
                              Storage system address: 10.0.0.2
    Storage system port: 443
    HTTPS connection
    
    1:    Name                = Extreme Performance
          Drives              =
          Total space         = 0
          Current allocation  = 0 
          Remaining space     = 0
    
    
    2:    Name                = Performance
          Drives              = 1 x 500GB Virtual
          Total space         = 631242752000 (500.0G)
          Current allocation  = 12624855040 (10.0G)
          Remaining space     = 618617896960 (490.0G)
    
    
    3:    Name                = Capacity
          Drives              =
          Total space         = 0
          Current allocation  = 0 
          Remaining space     = 0
    
                            

    View pool resources

    This command displays a list of storage resources allocated in a pool. These can include storage resources provisioned on the specified pool and NAS servers that have file systems allocated in the pool.

    The following table lists the attributes for pool resources.

    Table 4. Pool resources
    Attribute
    Description
    ID
    Storage resource identifier.
    Name
    Name of the storage resource.
    Resource type
    Type of the resource. Valid values are:
    • LUN
    • File system
    • LUN group
    • VMware NFS
    • VMware VMFS
    • NAS server
    Pool
    Name of the pool.
    Total pool space used
    Total space in the pool used by a storage resource. This includes primary data used size, snapshot used size, and metadata size. Space in the pool can be freed if snapshots and thin clones for storage resources are deleted, or have expired.
    Total pool space preallocated
    Total space reserved from the pool by the storage resource for future needs to make writes more efficient. The pool may be able to reclaim some of this if space is running low. Additional pool space can be freed if snapshots or thin clones are deleted or expire, and also if Data Reduction is applied.
    Total pool non-base space used
    Total pool space used by snapshots and thin clones.
    Health state
    Health state of the file system. The health state code appears in parentheses.
    Health details
    Additional health information. See Appendix A, Reference, for health information details.
    Format
    /stor/config/pool/sr [{-pool <value> | -poolName <value>}] show
    Object qualifiers
    Qualifier
    Description
    -pool
    Type the ID of the pool.
    -poolName
    Type the name of the pool.
    Example

    The following command shows details for all storage resources associated with the pool pool_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/sr -pool pool_1 show -detail
                          Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:       ID                             = res_1
             Name                           = File_System_1
             Resource type                  = File System
             Pool                           = pool_1
             Total pool space used          = 53024473088 (49.3G)
             Total pool preallocated        = 15695003648 (14.6G)
             Total pool snapshot space used = 7179124736 (6.6G)
             Total pool non-base space used = 7179124736 (6.6G)
             Health state                   = OK (5)
             Health details                 = "The component is operating normally. No action is required."
    
    2:       ID                             = sv_1
             Name                           = AF LUN 1
             Resource type                  = LUN
             Pool                           = pool_1
             Total pool space used          = 14448566272 (13.4G)
             Total pool preallocated        = 4610351104 (4.2G)
             Total pool snapshot space used = 4593991680 (4.2G)
             Total pool non-base space used = 4593991680 (4.2G)
             Health state                   = OK (5)
             Health details                 = "The LUN is operating normally. No action is required."
    
    3:       ID                             = res_2
             Name                           = File_System_2
             Resource type                  = File System
             Pool                           = pool_1
             Total pool space used          = 117361025024 (109.3G)
             Total pool preallocated        = 3166494720 (2.9G)
             Total pool snapshot space used = 41022308352 (38.2G)
             Total pool non-base space used = 41022308352 (38.2G)
             Health state                   = OK (5)
             Health details                 = "The component is operating normally. No action is required."
    
    4:      ID                              = sv_2
            Name                            = AF LUN 2
            Resource type                   = LUN
            Pool                            = pool_1
            Total pool space used           = 9500246016 (8.8G)
            Total pool preallocated         = 2579349504 (2.4G)
            Total pool snapshot space used  = 0
            Total pool non-base space used  = 0
            Health state                    = OK (5)
            Health details                  = "The LUN is operating normally. No action is required."
    
    5:      ID                              = res_3
            Name                            = CG1
            Resource type                   = LUN group
            Pool                            = pool_1
            Total pool space used           = 892542287872 (831.2G)
            Total pool preallocated         = 8863973376 (8.2G)
            Total pool snapshot space used  = 231799308288 (215.8G)
            Total pool non-base space used  = 231799308288 (215.8G)
            Health state                    = OK (5)
            Health details                  = "The component is operating normally. No action is required."
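    The per-resource figures in a listing like the one above can be totaled to see how much pool capacity is held by snapshots and thin clones, and is therefore reclaimable if they are deleted or expire. A Python sketch with the non-base values hard-coded from this example:

```python
# "Total pool non-base space used" per resource, in bytes (taken from the
# example output above).
non_base_used = {
    "File_System_1": 7179124736,
    "AF LUN 1": 4593991680,
    "File_System_2": 41022308352,
    "AF LUN 2": 0,
    "CG1": 231799308288,
}

# Total pool space that snapshots and thin clones are holding in pool_1.
total_non_base = sum(non_base_used.values())
```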
    
                        

    Manage FAST VP general settings

    Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a pool based on data access patterns.

    The following table lists the attributes for FAST VP general settings.

    Table 5. FAST VP general attributes
    Attribute
    Description
    Paused
    Identifies whether the data relocation is paused. Value is one of the following:
    • yes
    • no
    Schedule-enabled
    Identifies whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
    • yes
    • no
    Frequency
    Data relocation schedule. The format is:

    Every <days_of_the_week> at <start_time> until <end_time>

    where:
    • <days_of_the_week> - List of the days of the week that data relocation will run.
    • <start_time> - Time the data relocation starts.
    • <end_time> - Time the data relocation finishes.
    Rate
    Identifies the transfer rate for the data relocation. Value is one of the following:
    • Low - Least impact on system performance.
    • Medium - Moderate impact on system performance.
    • High - Most impact on system performance.
    Default value is medium.
    Note:  This field is blank if data relocation is not in progress.
    Data to move up
    The amount of data in the pool scheduled to be moved to a higher storage tier.
    Data to move down
    The amount of data in the pool scheduled to be moved to a lower storage tier.
    Data to move within
    The amount of data in the pool scheduled to be moved within the same storage tiers for rebalancing.
    Estimated scheduled relocation time
    Identifies the estimated time required to perform the next data relocation.
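    The Frequency string described in the table above has a fixed shape, so it can be split apart with a single expression. An illustrative Python sketch (not part of uemcli) that parses a value such as "Every Mon, Fri at 22:30 until 8:00":

```python
import re

def parse_frequency(value):
    """Split 'Every <days> at <start> until <end>' into its parts."""
    m = re.match(r"Every (?P<days>.+) at (?P<start>\S+) until (?P<end>\S+)", value)
    if not m:
        return None
    return {
        "days": [d.strip() for d in m.group("days").split(",")],
        "start": m.group("start"),
        "end": m.group("end"),
    }

sched = parse_frequency("Every Mon, Fri at 22:30 until 8:00")
```

    For the string shown, sched["days"] is ["Mon", "Fri"] and the window runs from "22:30" to "8:00".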

    Change FAST VP general settings

    Change FAST VP general settings.

    Format
    /stor/config/fastvp set [-async] [-schedEnabled {yes | no}] [-days <value>] [-at <value>] [-until <value>] [-rate {low | medium | high}] [-paused {yes | no}]
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -paused
    Specify whether to pause data relocation on the storage system. Valid values are:
    • yes
    • no
    -schedEnabled
    Specify whether the pool is rebalanced according to the system FAST VP schedule. Valid values are:
    • yes
    • no
    -days
    Specify a comma-separated list of the days of the week to schedule data relocation. Valid values are:
    • mon – Monday
    • tue – Tuesday
    • wed – Wednesday
    • thu – Thursday
    • fri – Friday
    • sat – Saturday
    • sun – Sunday
    -at

    Specify the time to start the data relocation. The format is:

    [HH:MM]

    where:
    • HH – Hour
    • MM – Minute
    Valid values are between 00:00 and 23:59. Default value is 00:00.
    -until

    Specify the time to stop the data relocation. The format is:

    [HH:MM]

    where:
    • HH – Hour
    • MM – Minute
    Valid values are between 00:00 and 23:59. Default value is eight hours after the time specified with the -at parameter.
    -rate
    Specify the transfer rate for the data relocation. Value is one of the following:
    • low – Least impact on system performance.
    • medium – Moderate impact on system performance.
    • high – Most impact on system performance.
    Default value is medium.
    Example

    The following command changes the data relocation schedule to run on Mondays and Fridays from 23:00 to 07:00:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp set -schedEnabled yes -days "Mon,Fri" -at 23:00 -until 07:00
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    View FAST VP general settings

    View the FAST VP general settings.

    Format
    /stor/config/fastvp show -detail
    Example

    The following command displays the FAST VP general settings:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1: Paused                              = no
       Schedule enabled                    = yes
       Frequency                           = Every Mon, Fri at 22:30 until 8:00
       Rate                                = high
       Data to move up                     = 4947802324992 (4.5T)
       Data to move down                   = 4947802324992 (4.5T)
       Data to move within                 = 4947802324992 (4.5T)
       Estimated scheduled relocation time = 7h 30m
                            

    Manage FAST Cache (supported physical deployments only)

    FAST Cache is a storage efficiency technology that uses drives to expand the cache capability of the storage system to provide improved performance.

    The following table lists the attributes for FAST Cache:

    Table 6. FAST Cache attributes
    Attribute
    Description
    Capacity
    Capacity of the FAST Cache.
    Drives
    The list of drive types, and the number of drives of each type in the FAST Cache.
    Number of drives
    Total number of drives in the FAST Cache.
    RAID level
    RAID level applied to the FAST Cache drives. This value is always RAID 1.
    Health state
    Health state of the FAST Cache. The health state code appears in parentheses.
    Health details
    Additional health information. See Appendix A, Reference, for health information details.

    Create FAST Cache

    Configure FAST Cache. The storage system generates an error if FAST Cache is already configured.

    Format
    /stor/config/fastcache create [-async] -diskGroup <value> -drivesNumber <value> [-enableOnExistingPools {yes | no}]
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -diskGroup
    Specify the drive group to include in the FAST Cache.
    Note:  Only SAS Flash 2 drives can be used in the FAST Cache.
    -drivesNumber
    Specify the number of drives to include in the FAST Cache.
    -enableOnExistingPools
    Specify whether FAST Cache is enabled on all existing pools. Valid values are:
    • yes
    • no
    Example

    The following command configures FAST Cache with six drives from drive group dg_2, and enables FAST Cache on existing pools:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache create -diskGroup dg_2 -drivesNumber 6 -enableOnExistingPools yes
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    View FAST Cache settings

    View the FAST Cache parameters.

    Format
    /stor/config/fastcache show
    Example

    The following command displays the FAST Cache parameters for a medium endurance Flash drive:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     Total space           = 536870912000 (500G)
           Drives                = 6 x 200GB SAS Flash 2
           Number of drives      = 6
           RAID level            = 1
           Health state          = OK (5)
           Health details        = "The component is operating normally.  No action is required."
                            

    Extend FAST Cache

    Extend the FAST Cache by adding more drives.

    Format
    /stor/config/fastcache extend [-async] -diskGroup <value> -drivesNumber <value>
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -diskGroup
    Specify the comma-separated list of SAS Flash 2 drive groups to add to the FAST Cache. Any added drives must have the same drive type and drive size as the existing drives.
    -drivesNumber
    Specify the number of drives for each corresponding drive group to be added to the FAST Cache.
    Example

    The following command adds six drives from drive group dg_2 to the FAST Cache:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache extend -diskGroup dg_2 -drivesNumber 6
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    Shrink FAST Cache

    Shrink the FAST Cache by removing storage objects.

    Format
    /stor/config/fastcache shrink [-async] -so <value>
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -so
    Specify the comma-separated list of storage objects to remove from the FAST Cache. Run the /stor/config/fastcache/so show command to obtain a list of all storage objects currently in the FAST Cache.
    Example

    The following command removes RAID group rg_1 from the FAST Cache:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache shrink -so rg_1
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
    
                            

    Delete FAST Cache

    Delete the FAST Cache configuration. The storage system generates an error if FAST Cache is not configured on the system.

    Format
    /stor/config/fastcache delete [-async]
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command deletes the FAST Cache configuration:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache delete
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    Manage FAST Cache storage objects (physical deployments only)

    FAST Cache storage objects include the RAID groups and drives that are in the FAST Cache.

    Table 7. FAST Cache storage object attributes
    Attribute
    Description
    ID
    Identifier of the storage object.
    Type
    Type of storage object.
    RAID level
    RAID level applied to the storage object.
    Drive type
    Type of drive.
    Number of drives
    Number of drives in the storage object.
    Drives
    Comma-separated list of the drive IDs for each storage object.
    Total space
    Total space used by the storage object.
    Device state
    The status of the FAST Cache device. Values are:
    • OK - This cache device is operating normally.
    • Degraded - One drive of this cache device is faulted.
    • Faulted - This cache device cannot operate normally.
    • Expanding - This cache device is expanding.
    • Expansion Ready - This cache device finished expanding.
    • Expansion Failure - This cache device failed to expand.
    • Shrinking - This cache device is shrinking.
    • Shrink Done - This cache device has flushed pages and is removed from FAST Cache.

    View FAST Cache storage objects

    View a list of all storage objects, including RAID groups and drives, that are in the FAST Cache.

    Format
    /stor/config/fastcache/so [-id <value> ] show
    Object qualifier
    Qualifier
    Description
    -id
    Type the ID of the storage object in the FAST Cache.
    Example 1

    The following example shows FAST Cache storage objects on the system.

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache/so show
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    ID                  = rg_6
          Type                = RAID group 
          Stripe length       = 2
          RAID level          = 1
          Number of drives    = 2
          Drive type          = SAS Flash 2
          Drives              = dae_0_1_disk_1, dae_0_1_disk_2 
          Total space         = 195400433664 (181.9G)
          Device state        = OK
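    Sizes in the CLI output pair a raw byte count with a shorthand such as 181.9G or 4.1T. These appear to be binary units (GiB/TiB) truncated to one decimal place; a small sketch under that assumption:

```python
def human_size(nbytes):
    """Render a byte count in the G/T notation used by uemcli output.

    Assumption (not stated by the CLI reference): binary units, GiB below
    1 TiB and TiB above, truncated (not rounded) to one decimal place,
    which matches most sizes shown in the examples.
    """
    tib = 1024 ** 4
    gib = 1024 ** 3
    if nbytes >= tib:
        value, suffix = nbytes / tib, "T"
    else:
        value, suffix = nbytes / gib, "G"
    truncated = int(value * 10) / 10  # truncate, do not round
    return f"{nbytes} ({truncated}{suffix})"

# human_size(195400433664) -> '195400433664 (181.9G)'
```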
                            

    View storage profiles (physical deployments only)

    Storage profiles are preconfigured settings for configuring pools based on the following:

    • Types of storage resources that will use the pools.
    • Intended usage of the pool.

    For example, create a pool for file system storage resources intended for general use. When configuring a pool, specify the ID of the storage profile to apply to the pool.

    Note:  Storage profiles are not restrictive with regard to storage provisioning. For example, you can provision file systems from an FC or iSCSI database pool. However, the characteristics of the storage will be best suited to the indicated storage resource type and use.

    Each storage profile is identified by an ID.

    The following table lists the attributes for storage profiles.

    Table 8. Storage profile attributes
    Attribute
    Description
    ID
    ID of the storage profile.
    Type
    (Available only for systems that support dynamic pools) Type of pool the profile can create. Value is one of the following:
    • Dynamic
    • Traditional
    Description
    Brief description of the storage profile.
    Drive type
    Types of drives for the storage profile.
    RAID level
    RAID level number for the storage profile. Value is one of the following:
    • 1 - RAID level 1.
    • 5 - RAID level 5.
    • 6 - RAID level 6.
    • 10 - RAID level 1+0.
    Maximum capacity
    Maximum storage capacity for the storage profile.
    Stripe length
    Number of drives the data is striped across.
    Note:  For best fit profiles, this value is Best fit.
    Disk group
    List of drive groups recommended for the storage pool configurations of the specified storage profile. This is calculated only when the -configurable option is specified.
    Maximum drives to configure
    List of the maximum number of drives allowed for the specified storage profile in the recommended drive groups. This is calculated only when the -configurable option is specified.
    Maximum capacity to configure
    List of the maximum free capacity of the drives available to configure for the storage profile in each recommended drive group. This is calculated only when the -configurable option is specified.
    Note:   The show action command explains how to change the output format.
    Format
    /stor/config/profile [-id <value> | -driveType <value> [-raidLevel <value>] | -traditional] [-configurable] show
    Object qualifier
    Qualifier
    Description
    -id
    Type the ID of a storage profile.
    -driveType
    Specify the type of drive.
    -raidLevel
    Specify the RAID type of the profile.
    -traditional
    (Available only for systems that support dynamic pools) Specify this option to view the profiles that you can use for creating traditional pools. To view the profiles you can use for creating dynamic pools, omit this option.
    -configurable
    Show only profiles that can be configured, that is, those with non-empty drive group information. If specified, calculates the following drive group information for each profile:
    • Disk group
    • Maximum drives to configure
    • Maximum capacity to configure

    If the profile is for a dynamic pool, the calculated information indicates whether the drive group has enough drives for pool creation. The calculation assumes that the pool will be created with the drives in the specified drive group only.

    Example 1

    The following command shows details for storage profiles that can be used to create dynamic pools:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/profile -configurable show
                          Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    ID                            = profile_22
          Type                          = Dynamic
          Description                   = SAS Flash 2 RAID5 (4+1)
          Drive type                    = SAS Flash 2
          RAID level                    = 5
          Maximum capacity              = 4611148087296 (4.1T)
          Stripe length                 = Maximum capacity
          Disk group                    = 
          Maximum drives to configure   = 
          Maximum capacity to configure = 
    
    2:    ID                            = profile_30
          Type                          = Dynamic
          Description                   = SAS Flash 2 RAID10 (1+1)
          Drive type                    = SAS Flash 2
          RAID level                    = 10
          Maximum capacity              = 9749818597376 (8.8T)
          Stripe length                 = 2
          Disk group                    = 
          Maximum drives to configure   = 
          Maximum capacity to configure = 
    
    3:    ID                            = profile_31
          Type                          = Dynamic
          Description                   = SAS Flash 2 RAID10 (2+2)
          Drive type                    = SAS Flash 2
          RAID level                    = 10
          Maximum capacity              = 9749818597376 (8.8T)
          Stripe length                 = 4
          Disk group                    = 
          Maximum drives to configure   = 
          Maximum capacity to configure = 
    
                        
    Example 2

    The following command shows details for storage profiles that can be used to create traditional pools in models that support dynamic pools:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/profile -traditional -configurable show
                          Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    ID                            = tprofile_22
          Type                          = Traditional
          Description                   = SAS Flash 3 RAID5 (4+1)
          Drive type                    = SAS Flash 3
          RAID level                    = 5
          Maximum capacity              = 4611148087296 (4.1T)
          Stripe length                 = Maximum capacity
          Disk group                    = dg_16
          Maximum drives to configure   = 5
          Maximum capacity to configure = 1884243623936 (1.7T)
    
    2:    ID                            = tprofile_30
          Type                          = Traditional
          Description                   = SAS Flash 3 RAID10 (1+1)
          Drive type                    = SAS Flash 3
          RAID level                    = 10
          Maximum capacity              = 9749818597376 (8.8T)
          Stripe length                 = 2
          Disk group                    = dg_13, dg_15
          Maximum drives to configure   = 10, 10
          Maximum capacity to configure = 1247522127872 (1.1T), 2954304921600 (2.6T)
    
    3:    ID                            = tprofile_31
          Type                          = Traditional
          Description                   = SAS Flash 3 RAID10 (2+2)
          Drive type                    = SAS Flash 3
          RAID level                    = 10
          Maximum capacity              = 9749818597376 (8.8T)
          Stripe length                 = 4
          Disk group                    = dg_13, dg_15
          Maximum drives to configure   = 8, 8
          Maximum capacity to configure = 2363443937280 (2.1T), 952103075840 (886.7G)
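    Profile descriptions such as "SAS Flash 3 RAID5 (4+1)" encode the data+parity layout of each RAID stripe. As a rough rule, usable capacity is the drive capacity scaled by data/(data+parity), using only whole stripes. A sketch under that simplification; it deliberately ignores hot spares, metadata overhead, and the spare-space reservation of dynamic pools, so real "Maximum capacity to configure" values will be lower:

```python
def usable_capacity(drive_count, drive_bytes, data, parity):
    """Approximate usable bytes for a traditional RAID layout.

    'data' and 'parity' come from the profile description, e.g.
    RAID5 (4+1) -> data=4, parity=1; RAID10 (2+2) -> data=2, parity=2.
    Rough sketch only: ignores spares and system overhead.
    """
    width = data + parity
    usable_drives = (drive_count // width) * width  # whole stripes only
    return usable_drives * drive_bytes * data // width

# 5 drives in RAID5 (4+1): one stripe, 4/5 of raw capacity is usable.
```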
    
                        

    Manage drive groups (physical deployments only)

    Drive groups are groups of drives on the system with similar characteristics, including type, capacity, and spindle speed. When configuring pools, you select the drive group to use and the number of drives from that group to add to the pool.

    Each drive group is identified by an ID.

    The following table lists the attributes for drive groups.

    Table 9. Drive group attributes
    Attribute
    Description
    ID
    ID of the drive group.
    Drive type
    Type of drives in the drive group.
    FAST Cache
    Indicates whether the drive group's drives can be added to FAST Cache.
    Drive size
    Capacity of one drive in the drive group.
    Rotational speed
    Rotational speed of the drives in the group.
    Number of drives
    Total number of drives in the drive group.
    Unconfigured drives
    Total number of drives in the drive group that are not in a pool.
    Capacity
    Total capacity of all drives in the drive group.
    Recommended number of spares
    Number of spares recommended for the drive group.
    Drives past EOL
    Number of drives past EOL (End of Life) in the group.
    Drives approaching EOL
    Number of drives that will reach EOL in 0-30 days, 0-60 days, 0-90 days, and 0-180 days.

    View drive groups

    View details about drive groups on the system. You can filter on the drive group ID.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/config/dg [-id <value>] [-traditional] show
    Object qualifier
    Qualifier
    Description
    -id
    Type the ID of a drive group.
    -traditional
    (Available only for systems that support dynamic pools) Specify this qualifier to have the system assume that the pools to be created are traditional pools.
    Example 1

    The following command shows details about all drive groups that can be used to configure dynamic pools:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     ID                           = dg_3
           Drive type                   = SAS Flash 2
           FAST Cache                   = yes
           Drive size                   = 393846128640 (366.7G)
           Vendor size                  = 400.0G
           Rotational speed             = 0 rpm
           Number of drives             = 3
           Unconfigured drives          = 3
           Capacity                     = 1181538385920 (1.1T)
           Recommended number of spares = 0
           Drives past EOL              = 0
           Drives approaching EOL       = 0 (0-30 days), 0 (0-60 days), 0 (0-90 days), 0 (0-180 days)
    
    2:     ID                           = dg_2
           Drive type                   = SAS Flash 2
           FAST Cache                   = yes
           Drive size                   = 196971960832 (183.4G)
           Vendor size                  = 200.0G
           Rotational speed             = 0 rpm
           Number of drives             = 7
           Unconfigured drives          = 7
           Capacity                     = 1378803725824 (1.2T)
           Recommended number of spares = 0
           Drives past EOL              = 0
           Drives approaching EOL       = 1 (0-30 days), 2 (0-60 days), 2 (0-90 days), 3 (0-180 days)
                            
    Example 2

    The following command shows details about all drive groups that can be used to configure traditional pools in models that support dynamic pools:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg -traditional show
                          Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     ID                           = dg_8
           Drive type                   = NL-SAS
           FAST Cache                   = no
           Drive size                   = 1969623564288 (1.7T)
           Vendor size                  = 2.0T
           Rotational speed             = 7200 rpm
           Number of drives             = 7
           Unconfigured drives          = 7
           Capacity                     = 13787364950016 (12.5T)
           Recommended number of spares = 1
    
    2:     ID                           = dg_15
           Drive type                   = SAS
           FAST Cache                   = no
           Drive size                   = 590894538752 (550.3G)
           Vendor size                  = 600.0G
           Rotational speed             = 15000 rpm
           Number of drives             = 16
           Unconfigured drives          = 4
           Capacity                     = 9454312620032 (8.5T)
           Recommended number of spares = 1
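    The show output above follows a regular layout: each object starts with "N:" and every attribute line has the form "Attribute = value". When scripting, that structure can be parsed mechanically. A sketch that assumes this default name:value format (the show action command also supports changing the output format, which may be easier to consume):

```python
def parse_show_output(text):
    """Parse uemcli 'show' output into a list of attribute dictionaries.

    Assumes the default output format seen in the examples: objects begin
    with 'N:' and attribute lines contain 'Attribute = value'. Header
    lines without '=' (address, port, connection type) are skipped.
    """
    objects, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        if line.split(":", 1)[0].strip().isdigit():  # 'N:' starts an object
            current = {}
            objects.append(current)
            line = line.split(":", 1)[1]
        key, _, value = line.partition("=")
        if current is not None:
            current[key.strip()] = value.strip()
    return objects
```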
                            

    View recommended drive group configurations

    View the recommended drive groups from which to add drives to a pool based on a specified storage profile or pool type.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/config/dg recom {-profile <value> | -pool <value> | -poolName <value>}
    Action qualifier
    Qualifier
    Description
    -profile
    Type the ID of a storage profile. The output will include the list of drive groups recommended for the specified storage profile.
    -pool
    Type the ID of a pool. The output will include the list of drive groups recommended for the specified pool.
    -poolName
    Type the name of a pool. The output will include the list of drive groups recommended for the specified pool.
    Example

    The following command shows the recommended drive groups for pool pool_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg recom -pool pool_1
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     ID                        = DG_1
           Drive type                = SAS
           Drive size                = 536870912000 (500GB)
           Number of drives          = 8
           Allowed numbers of drives = 4,8
           Capacity                  = 4398046511104 (4TB)
    
    2:     ID                        = DG_2
           Drive type                = SAS
           Drive size                = 268435456000 (250GB)
           Number of drives          = 4
           Allowed numbers of drives = 4
           Capacity                  = 1099511627776 (1TB)
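    The "Allowed numbers of drives" values constrain how many drives you may take from a recommended group. When scripting pool expansion, one reasonable policy is to pick the largest allowed count that does not exceed what you want to add. A hypothetical helper along those lines:

```python
def pick_drive_count(allowed, wanted):
    """Pick a drive count from a group's 'Allowed numbers of drives' list.

    allowed: the allowed counts reported by '/stor/config/dg recom',
             e.g. [4, 8].
    wanted:  how many drives you would like to add.
    Returns the largest allowed count <= wanted, or None if none fits.
    (Hypothetical helper; this selection policy is an assumption.)
    """
    fits = [n for n in allowed if n <= wanted]
    return max(fits) if fits else None

# With allowed counts [4, 8] and 6 drives wanted, 4 is the best fit.
```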
                            

    Manage storage system capacity settings

    The following table lists the general storage system capacity attributes:

    Table 10. General storage system capacity attributes
    Attributes
    Description
    Free space
    Specifies the amount of space that is free (available to be used) in all storage pools on the storage system.
    Used space
    Specifies the amount of space that is used in all storage pools on the storage system.
    Preallocated space
    Space reserved across all of the pools on the storage system. This space is reserved for future needs of storage resources, which can make writes more efficient. Each pool may be able to reclaim preallocated space from storage resources if the storage resources are not using the space, and the pool space is running low.
    Total space
    Specifies the total amount of space, both free and used, in all storage pools on the storage system.
    Data Reduction space saved
    Specifies the storage size saved on the entire system when using data reduction.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction percent
    Specifies the storage percentage saved on the entire system when using data reduction.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction ratio
    Specifies the ratio between data without data reduction and data after data reduction savings.
    Note:  Data reduction is available for thin LUNs and thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.

    View system capacity settings

    View the current storage system capacity settings.

    Format
    /stor/general/system show
    Example

    The following command displays details about the storage capacity on the system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/general/system show
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:      Free space                       = 4947802324992 (1.5T)
            Used space                       = 4947802324992 (1.5T)
            Total space                      = 9895604649984 (3.0T)
            Preallocated space               = 60505210880 (56.3G)
            Compression space saved          = 4947802324992 (1.5T)
            Compression percent              = 50%
            Compression ratio                = 1
            Data Reduction space saved       = 4947802324992 (1.5T)
            Data Reduction percent           = 50%
            Data Reduction ratio             = 1
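    The saved-space, percent, and ratio attributes all compare data as written with data as physically stored. The exact formulas are not spelled out in this reference; a sketch under the common assumptions that percent = saved/logical and ratio = logical:stored:

```python
def data_reduction_stats(logical_bytes, stored_bytes):
    """Relate pre-reduction (logical) data to what is actually stored.

    Assumed definitions (not taken from the CLI reference):
      saved   = logical - stored
      percent = saved / logical * 100
      ratio   = logical / stored
    """
    saved = logical_bytes - stored_bytes
    percent = 100.0 * saved / logical_bytes
    ratio = logical_bytes / stored_bytes
    return saved, percent, ratio

# 3.0 TiB written, 1.5 TiB stored -> 50% saved, 2:1 ratio.
```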
                            

    Manage system tier capacity settings

    The following table lists the general system tier capacity attributes:

    Table 11. General system tier capacity attributes
    Attributes
    Description
    Name
    Name of the tier. One of the following:
    • Extreme Performance
    • Performance
    • Capacity
    Free space
    Specifies the amount of space that is free (available to be used) in the tier.
    Used space
    Specifies the amount of space that is used in the tier.
    Total space
    Specifies the total amount of space, both free and used, in the tier.

    View system tier capacity

    View the current system tier capacity settings.

    Format
    /stor/general/tier show
    Example

    The following command displays details about the storage tier capacity on the system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/general/tier show
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:      Name        = Extreme Performance Tier
            Free space  = 4947802324992 (1.5T)
            Used space  = 4947802324992 (1.5T)
            Total space = 9895604649984 (3.0T)
    
    2:      Name        = Capacity Tier
            Free space  = 4947802324992 (1.5T)
            Used space  = 4947802324992 (1.5T)
            Total space = 9895604649984 (3.0T)
                            

    Manage file systems

    File systems are logical containers on the system that provide file-based storage resources to hosts. You configure file systems on NAS servers, which maintain and manage the file systems. You create network shares on the file system, which connected hosts map or mount to access the file system storage. When creating a file system, you can enable support for the following network shares:

    • SMB shares (previously named CIFS shares), which provide storage access to Windows hosts.
    • Network file system (NFS) shares, which provide storage access to Linux/UNIX hosts.

    An ID identifies each file system.

    The following table lists the attributes for file systems:

    Table 12. File system attributes
    Attribute
    Description
    ID
    ID of the file system.
    Name
    Name of the file system.
    Description
    Description of the file system.
    Health state
    Health state of the file system. The health state code appears in parentheses. Value is one of the following:
    • OK (5)—File system is operating normally.
    • OK_BUT (7)—File system is working, but one or more of the following may have occurred:
      • The storage resource is being initialized or deleted.
      • The file system on this storage resource is running out of space. Allocate more storage space to the storage resource.
    • Degraded/Warning (10)—Working, but one or more of the following may have occurred:
      • One or more of its storage pools are degraded.
      • A replication session for the storage resource is degraded.
      • It has almost reached full capacity. Increase the primary storage size, or create additional file systems to store the data, to avoid data loss. Change file system settings explains how to change the primary storage size.
    • Minor failure (15)—One or both of the following may have occurred:
      • One or more of its storage pools have failed.
      • The associated NAS server has failed.
    • Major failure (20)—One or both of the following may have occurred:
      • One or more of its storage pools have failed.
      • File system is unavailable.
    • Critical failure (25)—One or more of the following may have occurred:
      • One or more of its storage pools are unavailable.
      • File system is unavailable.
      • File system has reached full capacity. Increase the primary storage size, or create additional file systems to store the data, to avoid data loss. Change file system settings explains how to change the primary storage size.
    • Non-recoverable error (30)—One or both of the following may have occurred:
      • One or more of its storage pools are unavailable.
      • File system is unavailable.
    Health details
    Additional health information. See Appendix A, Reference, for health information details.
    File system
    Identifier for the file system. Output of some metrics commands displays only the file system ID. This enables you to easily identify the file system in the output.
    Server
    Name of the NAS server that the file system is mounted on.
    Storage pool ID
    ID of the storage pool the file system is using.
    Storage pool
    Name of the storage pool that the file system uses.
    Format
    Format of the file system. Value is UFS64.
    Protocol
    Protocol used to enable network shares from the file system. Value is one of the following:
    • nfs—Protocol for Linux/UNIX hosts.
    • cifs—Protocol for Windows hosts.
    • multiprotocol—Protocol for UNIX and Windows hosts.
    Access policy
    (Applies to multiprotocol file systems only.) File system access policy option. Value is one of the following:
    • native (default)—When this policy is selected, UNIX mode bits are used for UNIX/Linux clients, and Windows permissions (ACLs) are used for Windows clients.
    • UNIX—When this policy is selected, UNIX mode bits are used to grant access to each file on the file system.
    • Windows—When this policy is selected, permissions that are defined in Windows ACLs are honored for both Windows and UNIX/Linux clients (UNIX mode bits are ignored).
    Folder rename policy
    (Applies to multiprotocol file systems only.) File system folder rename policy option. This policy controls the circumstances under which NFS and SMB clients can rename a directory. Value is one of the following:
    • forbiddenSmb (default)—Only NFS clients can rename directories without any restrictions. An SMB client cannot rename a directory if at least one file is opened in the directory or in one of its subdirectories.
    • allowedAll—All NFS and SMB clients can rename directories without any restrictions.
    • forbiddenAll—NFS and SMB clients cannot rename a directory if at least one file is opened in the directory or in one of its subdirectories.
    Locking policy
    (Applies to multiprotocol file systems only.) File system locking policy option. This policy controls whether NFSv4 range locks must be honored. Value is one of the following:
    • mandatory (default)—Uses the SMB and NFSv4 protocols to manage range locks for a file that is in use by another user. A mandatory locking policy prevents data corruption if there is concurrent access to the same locked data.
    • advisory—In response to lock requests, reports that there is a range lock conflict, but does not prevent access to the file. This policy allows NFSv2 and NFSv3 applications that are not range-lock-compliant to continue working, but risks data corruption if there are concurrent writes.
    Size
    Quantity of storage reserved for primary data.
    Size used
    Quantity of storage currently used for primary data.
    Maximum size
    Maximum size to which you can increase the primary storage capacity.
    Thin provisioning enabled
    Identifies whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over provisioned to support more storage capacity than they actually possess.
    Note:   The Unisphere online help provides more details on thin provisioning.
    Data Reduction enabled
    Identifies whether data reduction is enabled for this file system. Valid values are:
    • yes
    • no (default)
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction space saved
    Total space saved (in gigabytes) for this file system by using data reduction.
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction percent
    Total file system storage percentage saved for the file system by using data reduction.
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Data Reduction ratio
    Ratio between data without data reduction and data after data reduction savings.
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    Advanced deduplication enabled
    Identifies whether advanced deduplication is enabled for this file system. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the file system. Valid values are:
    • yes
    • no (default)
    Note:  The thin file systems must be created on a Unity system running version 4.2.x or later. Advanced deduplication is available only on:
    • Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
    • Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
    • All-Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
    Current allocation
    If enabled, the quantity of primary storage currently allocated through thin provisioning.
    Total pool space preallocated
    Space reserved from the pool for the file system for future needs to make writes more efficient. The pool may be able to reclaim some of this space if pool space is low.
    Total pool space used
    Total pool space used in the pool for the file system. This includes the allocated space and allocations for snaps and overhead. This does not include preallocated space.
    Minimum size allocated
    (Displays for file systems created on a Unity system running OE version 4.1.) Minimum quantity of primary storage allocated through thin provisioning. File shrink operations cannot decrease the file system size lower than this value.
    Protection size used
    Quantity of storage currently used for protection data.
    Protection schedule
    ID of an applied protection schedule. View protection schedules explains how to view the IDs of schedules on the system.
    Protection schedule paused
    Identifies whether an applied protection schedule is currently paused. Value is yes or no.
    FAST VP policy
    FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
    • startHighThenAuto (default)—Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
    • auto—Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource's data based on the storage resource's performance statistics such that data is relocated among tiers according to I/O activity.
    • highest—Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
    • lowest—Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
    FAST VP distribution

    Percentage of the file system storage assigned to each tier. The format is:

    <tier_name>:<value>%

    where:
    • <tier_name> is the name of the storage tier.
    • <value> is the percentage of storage in that tier.
    CIFS synchronous write
    Identifies whether the SMB synchronous writes option is enabled. Value is yes or no.
    • The SMB synchronous writes option provides enhanced support for applications that store and access database files on Windows network shares. On most SMB filesystems read operations are synchronous and write operations are asynchronous. When you enable the SMB synchronous writes option for a Windows (SMB) file system, the system performs immediate synchronous writes for storage operations, regardless of how the SMB protocol performs write operations.
    • Enabling synchronous write operations allows you to store and access database files (for example, MySQL) on SMB network shares. This option guarantees that any write to the share is done synchronously and reduces the chances of data loss or file corruption in various failure scenarios, for example, loss of power.
    Note:  Do not enable SMB synchronous writes unless you intend to use the Windows file systems to provide storage for database applications.
    The Unisphere online help provides more details on SMB synchronous write.
    CIFS oplocks
    Identifies whether opportunistic file locks (oplocks) for SMB network shares are enabled. Value is yes or no.
    • Oplocks allow SMB clients to buffer file data locally before sending it to a server. SMB clients can then work with files locally and periodically communicate changes to the system, rather than having to communicate every operation to the system over the network.
    • This feature is enabled by default for Windows (SMB) file systems. Unless your application handles critical data or has specific requirements that make this mode of operation unfeasible, leave oplocks enabled.
    The Unisphere online help provides more details on CIFS oplocks.
    CIFS notify on write
    Identifies whether write notifications for SMB network shares are enabled. Value is yes or no. When enabled, Windows applications receive notifications each time a user writes or changes a file on the SMB share.
    Note:   If this option is enabled, the value for SMB directory depth indicates the lowest directory level to which the notification setting applies.
    CIFS notify on access
    Identifies whether file access notifications for SMB shares are enabled. Value is yes or no. When enabled, Windows applications receive notifications each time a user accesses a file on the SMB share.
    Note:   If this option is enabled, the value for SMB directory depth indicates the lowest directory level to which the notification setting applies.
    CIFS directory depth
    For write and access notifications on SMB network shares, the subdirectory depth permitted for file notifications. Value range is 1-512. Default is 512.
    Replication type
    Identifies what type of asynchronous replication this file system is participating in. Valid values are:
    • none
    • local
    • remote
    Synchronous replication type
    Identifies what type of synchronous replication this file system is participating in. Valid values are:
    • none
    • remote
    Replication destination
    Identifies whether the storage resource is a destination for a replication session (local or remote). Valid values are:
    • yes
    • no
    Migration destination
    Identifies whether the storage resource is a destination for a NAS import session. Valid values are:
    • yes
    • no
    Creation time
    Date and time when the file system was created.
    Last modified time
    Date and time when the file system settings were last changed.
    Snapshot count
    Number of snapshots created on the file system.
    Pool full policy
    Policy to follow when the pool is full and a write to the file system is attempted. This attribute enables you to preserve snapshots on the file system when a pool is full. Valid values are:
    • Delete All Snaps (default for thick file systems)—Delete snapshots associated with the file system when the pool reaches full capacity.
    • Fail Writes (default for thin file systems)—Fail write operations to the file system when the pool reaches full capacity.
    Note:  This attribute is only available for existing file systems. You cannot specify this attribute when creating a file system.
    Event publishing protocols
    List of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
    • nfs—Enable Events Publishing for NFS.
    • cifs—Enable Events Publishing for SMB (CIFS).
    FLR mode
    Specifies which version of File-level Retention (FLR) is enabled. Values are:
    • enterprise
    • compliance
    • disabled
    FLR has protected files
    Indicates whether the file system contains protected files. Values are:
    • yes
    • no
    FLR clock time
    Indicates file system clock time to track the retention date. For example, 2019-02-20 12:55:32.
    FLR max retention date
    Maximum date and time that has been set on any locked file in an FLR-enabled file system. For example, 2020-09-20 11:00:00.
    FLR min retention period
    Indicates the shortest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. The format is (<integer> d|m|y) | infinite, where:
    • d: days
    • m: months
    • y: years
    • infinite
    The default is 1 day (1d).

    Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    FLR default retention period
    Indicates the default retention period that is used in an FLR-enabled file system when a file is locked and a retention period is not specified at the file level.

    The format is (<integer> d|m|y) | infinite. Values are:

    • d: days
    • m: months
    • y: years (FLR-C compliance default is 1 year, 1y)
    • infinite (FLR-E enterprise default)

    Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    FLR max retention period
    Indicates the longest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. The format is (<integer> d|m|y) | infinite. Values are:
    • d: days
    • m: months
    • y: years
    • infinite (default)

    The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    FLR auto lock enabled
    Indicates whether automatic file locking for all files in an FLR-enabled file system is enabled. Values are:
    • yes
    • no
    FLR auto delete enabled
    Indicates whether automatic deletion of locked files from an FLR-enabled file system once the retention period has expired is enabled. Values are:
    • yes
    • no
    Note:  The system scans for expired files every seven days and deletes them automatically if auto-delete is enabled. The seven day period begins the day after auto-delete is enabled on the file system.
    FLR policy interval
    When Auto-lock new files is enabled, this indicates a time interval for how long to wait after files are modified before the files are automatically locked in an FLR-enabled file system.

    The format is <value> <qualifier>, where value is an integer and the qualifier is:

    • m--minutes
    • h--hours
    • d--days

    The value should be greater than 1 minute and less than 366 days.

    Error Threshold
    Specifies the threshold of used space in the file system as a percentage. When exceeded, error alert messages will be generated. The default value is 95%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value greater than the Warning Threshold and Info Threshold.
    Warning Threshold
    Specifies the threshold of used space in the file system as a percentage. When exceeded, warning alert messages will be generated. The default value is 75%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value less than the Error Threshold value, and greater than or equal to the Info Threshold value.
    Info Threshold
    Specifies the threshold of used space in the file system as a percentage. When exceeded, informational alert messages will be generated. The default value is 0 (disabled). This option must be set to a value less than the Warning Threshold value.
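    The threshold ordering described above (Info Threshold ≤ Warning Threshold < Error Threshold) can be applied with the change command covered later in this section. The following sketch is illustrative only; the address, credentials, and file system ID res_1 are placeholders:

    ```shell
    # Illustrative sketch (placeholder address, credentials, and file system ID):
    # set ascending capacity-alert thresholds on file system res_1.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 set -infoThreshold 50 -warningThreshold 75 -errorThreshold 95
    ```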

    Create file systems

    Create a multiprotocol file system, NFS file system, or CIFS (SMB) file system. You must create a file system for each type of share (NFS or CIFS) you plan to create. Once you create a file system, create the NFS or CIFS network shares and use the ID of the file system to associate it with a share.
    Note:   Size qualifiers provides details on using size qualifiers to specify a storage size.
    Prerequisites
    • Configure at least one storage pool for the file system to use and allocate at least one drive to the pool. Configure custom pools explains how to create custom pools.
    • Configure at least one NAS server to which to associate the file system. Create a NAS server explains how to configure NAS servers.
    Format
    /stor/prov/fs create [-async] -name <value> [-descr <value>] {-server <value> | -serverName <value>} {-pool <value> | -poolName <value>} -size <value> [-thin {yes | no}] [-dataReduction {yes [-advancedDedup {yes | no}] | no}] [–minSizeAllocated <value>] -type {{nfs | cifs | multiprotocol [-accessPolicy {native | Windows | Unix}] [-folderRenamePolicy {allowedAll | forbiddenSmb | forbiddenAll}] [-lockingPolicy {advisory | mandatory}]} [–cifsSyncWrites {yes | no}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] | nfs} [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-sched <value> [-schedPaused {yes | no}]] [-replDest {yes | no}][-eventProtocols <value>] [-flr {disabled | {enterprise | compliance} [-flrMinRet <value>] [-flrDefRet <value>] [-flrMaxRet <value>]}]
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -name
    Type a name for the file system.
    -descr
    Type a brief description of the file system.
    -server
    Type the ID of the NAS server that will be the parent NAS server for the file system. View NAS servers explains how to view the IDs of the NAS servers on the system.
    -serverName
    Type the name of the NAS server that will be the parent NAS server for the file system.
    -pool
    Type the ID of the pool to be used for the file system.
    -poolName
    Type the name of the pool to be used for the file system. This value is case insensitive. View pools explains how to view the names of the storage pools on the system.
    -size
    Type the quantity of storage to reserve for the file system.
    -thin
    Enable thin provisioning on the file system. Valid values are:
    • yes (default)
    • no
    -dataReduction
    Specify whether data reduction is enabled for the thin file system. Valid values are:
    • yes (default)
    • no
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must be created on Unity systems running version 4.2.x or later.
    -advancedDedup
    Specify whether advanced deduplication is enabled for the thin file system. This option is available only after data reduction has been enabled. Valid values are:
    • yes
    • no (default)
    Note:  The thin file systems must be created on a Unity system running version 4.2.x or later. Advanced deduplication is available only on:
    • Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
    • Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
    • All-Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
    -minSizeAllocated
    (Option available on a Unity system running OE version 4.1.) Specify the minimum size to allocate for the thin file system. Automatic and manual file shrink operations cannot decrease the file system size lower than this value. The default value is 3G, which is the minimum thin file system size.
    -type
    Specify the type of network shares to export from the file system. Valid values are:
    • nfs — Network shares for Linux/UNIX hosts.
    • cifs — Network shares for Windows hosts.
    • multiprotocol — Network shares for multiprotocol sharing.
    -accessPolicy
    (Applies to multiprotocol file systems only.) Specify the access policy for this file system. Valid values are:
    • native (default)
    • unix
    • windows
    -folderRenamePolicy
    (Applies to multiprotocol file systems only.) Specify the rename policy for the file system. Valid values are:
    • forbiddenSmb (default)
    • allowedAll
    • forbiddenAll
    -lockingPolicy
    (Applies to multiprotocol file systems only.) Specify the locking policy for the file system. Valid values are:
    • mandatory (default)
    • advisory
    -cifsSyncWrites
    Enable synchronous write operations for CIFS network shares. Valid values are:
    • yes
    • no (default)
    -cifsOpLocks
    Enable opportunistic file locks (oplocks) for CIFS network shares. Valid values are:
    • yes (default)
    • no
    -cifsNotifyOnWrite
    Enable to receive notifications when users write to a CIFS share. Valid values are:
    • yes
    • no (default)
    -cifsNotifyOnAccess
    Enable to receive notifications when users access a CIFS share. Valid values are:
    • yes
    • no (default)
    -cifsNotifyDirDepth
    If the value for -cifsNotifyOnWrite or -cifsNotifyOnAccess is yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
    -fastvpPolicy
    Specify the FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case insensitive):
    • startHighThenAuto (default) — Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
    • auto — Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource's data based on the storage resource's performance statistics such that data is relocated among tiers according to I/O activity.
    • highest — Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
    • lowest — Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
    -sched
    Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
    -schedPaused
    Specify whether to pause the protection schedule specified for -sched. Valid values are:
    • yes
    • no
    -replDest
    Specifies whether the resource is a replication destination. Valid values are:
    • yes
    • no (default)
    -eventProtocols
    Specifies the comma-separated list of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
    • nfs — Enable Events Publishing for NFS.
    • cifs — Enable Events Publishing for CIFS (SMB).
    -flr
    Specifies whether File-level Retention (FLR) is enabled and if so, which version of FLR is being used. Valid values are:
    • enterprise — Specify to enable FLR-E.
    • compliance — Specify to enable FLR-C.
    • disabled (default) — Specify to disable FLR.
    -flrMinRet
    Specify the shortest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. The format is (<integer> d|m|y) | infinite, where:
    • d: days
    • m: months
    • y: years
    • infinite
    The default is 1 day (1d).
    -flrDefRet
    Specify the default retention period that is used in an FLR-enabled file system where a file is locked, but a retention period was not specified at the file level.

    The format is (<integer> d|m|y) | infinite.

    • d: days
    • m: months
    • y: years (FLR-C compliance default is 1 year, 1y)
    • infinite — FLR-E (enterprise) default

    Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    The value of this parameter must be greater than the minimum retention period -flrMinRet.

    -flrMaxRet
    Specify the longest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. The format is (<integer> d|m|y) | infinite. Values are:
    • d: days
    • m: months
    • y: years
    • infinite (default)

    The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    The value of this parameter must be greater than the default retention period -flrDefRet.

    Example

    The following command creates a file system with these settings:

    • Name is FileSystem01.
    • Description is "Multiprotocol file system".
    • Uses the capacity storage pool.
    • Uses NAS server nas_2 as the parent NAS server.
    • Primary storage size is 3 GB.
    • Supports multiprotocol network shares.
    • Has a native access policy.
    • Is a replication destination.

    The file system receives the ID res_28:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs create -name FileSystem01 -descr "Multiprotocol file system" -server nas_2 -pool capacity -size 3G -type multiprotocol -accessPolicy native -replDest yes
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    ID = res_28
    Operation completed successfully.
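    A second, hedged sketch shows how the same create format can combine CIFS options with FLR-E; the name, NAS server, pool, size, and retention periods are illustrative placeholders:

    ```shell
    # Illustrative sketch (placeholder name, NAS server, pool, size, and retention
    # values): create a CIFS file system with synchronous writes and FLR-E enabled.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs create -name CIFSfs01 -server nas_2 -pool capacity -size 100G -type cifs -cifsSyncWrites yes -flr enterprise -flrMinRet 1d -flrDefRet 1y -flrMaxRet infinite
    ```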

    View file systems

    View details about a file system. You can filter on the file system ID.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/prov/fs [{-id <value> | -name <value> | -server <value> | -serverName <value>}] show
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a file system.
    -name
    Type the name of a file system.
    -server
    Type the ID of the NAS server for which the file systems will be displayed.
    -serverName
    Type the name of the NAS server for which the file systems will be displayed.
    Example

    The following command lists details about all file systems on the storage system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs show -detail
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    ID                              = res_1
          Name                            = fs
          Description                     =
          Health state                    = OK (5)
          Health details                  = "The component is operating normally. No action is required."
          File system                     = fs_1
          Server                          = nas_1
          Storage pool ID                 = pool_1
          Storage pool                    = pool
          Format                          = UFS64
          Protocol                        = nfs
          Access policy                   = unix
          Folder rename policy            = forbiddenSmb
          Locking policy                  = mandatory
          Size                            = 53687091200 (50.0G)
          Size used                       = 1620303872 (1.5G)
          Maximum size                    = 281474976710656 (256.0T)
          Thin provisioning enabled       = yes
          Compression enabled             = no
          Compression space saved         = 0
          Compression percent             = 0%
          Compression ratio               = 1.0:1
          Data Reduction enabled          = no
          Data Reduction space saved      = 0
          Data Reduction percent          = 0%
          Data Reduction ratio            = 1.0:1
          Advanced deduplication enabled  = no
          Current allocation              = 283140096 (270.0M)
          Preallocated                    = 2401214464 (2.2G)
          Total Pool Space Used           = 4041236480 (3.7G)
          Minimum size allocated          =
          Protection size used            = 0
          Snapshot count                  = 0
          Protection schedule             =
          Protection schedule paused      = no
          FLR mode                        = Disabled
          FLR has protected files         =
          FLR clock time                  =
          FLR max retention date          =
          FLR min retention period        =
          FLR default retention period    =
          FLR max retention period        =
          FLR auto lock enabled           =
          FLR auto delete enabled         =
          FLR policy interval             =
          Error threshold                 = 95%
          Warning threshold               = 75%
          Info threshold                  = 10%
          CIFS synchronous write          = no
          CIFS oplocks                    = no
          CIFS notify on write            = no
          CIFS notify on access           = no
          CIFS directory depth            = 512
          Replication type                = none
          Synchronous replication type    = none
          Replication destination         = no
          Migration destination           = no
          FAST VP policy                  = Start high then auto-tier
          FAST VP distribution            =
          Creation time                   = 2018-12-03 10:04:10
          Last modified time              = 2018-12-04 06:49:31
          Pool full policy                = Fail Writes
          Event publishing protocols      =
                            
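    To narrow the listing, the object qualifiers above can filter by parent NAS server. This hedged sketch assumes a NAS server with the placeholder ID nas_1:

    ```shell
    # Illustrative sketch (placeholder address, credentials, and NAS server ID):
    # list only the file systems served by NAS server nas_1.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -server nas_1 show
    ```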

    Change file system settings

    Change the settings for a file system.

    Note:   Size qualifiers explains how to use the size qualifiers when specifying a storage size.
    Format
    /stor/prov/fs {-id <value> | -name <value>} set [-async] [-descr <value>] [-accessPolicy {native | Unix | Windows}] [-folderRenamePolicy {allowedAll | forbiddenSmb | forbiddenAll}] [-lockingPolicy {advisory | mandatory}] [-size <value>] [-minSizeAllocated <value>] [-dataReduction {yes [-advancedDedup {yes | no}] | no}] [-cifsSyncWrites {yes | no}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-replDest {yes | no}] [-poolFullPolicy {deleteAllSnaps | failWrites}] [-eventProtocols <value>] [-flr [-flrMinRet <value>] [-flrDefRet <value>] [-flrMaxRet <value>] [-flrAutoLock {yes | no}] [-flrAutoDelete {yes | no}] [-flrPolicyInterval <value>]] [-errorThreshold <value>] [-warningThreshold <value>] [-infoThreshold <value>]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the file system to change.
    -name
    Type the name of the file system to change.
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -descr
    Type a brief description of the file system.
    -accessPolicy
    (Applies to multiprotocol file systems only.) Specify the access policy for the file system. Valid values are:
    • native
    • unix
    • windows
    -folderRenamePolicy
    (Applies to multiprotocol file systems only.) Specify the rename policy for the file system. Valid values are:
    • forbiddenSmb (default)
    • allowedAll
    • forbiddenAll
    -lockingPolicy
    (Applies to multiprotocol file systems only.) Specify the locking policy for the file system. Valid values are:
    • mandatory (default)
    • advisory
    -size
    Type the amount of storage in the pool to reserve for the file system.
    -minSizeAllocated
    (Option available on a Unity system running OE version 4.1.) Specify the minimum size to allocate for the thin file system. Automatic and manual file shrink operations cannot decrease the file system size lower than this value. The default value is 3G, which is the minimum thin file system size.
    -dataReduction
    Enable data reduction on the thin file system. Valid values are:
    • yes
    • no
    Note:  Data reduction is available for thin file systems in an All-Flash pool only. The thin file systems must have been created on Unity systems running version 4.2.x or later.
    -advancedDedup
    Enable advanced deduplication on the thin file system. This option is available only after data reduction has been enabled. Valid values are:
    • yes
    • no (default)
    Note:  The thin file systems must be created on a Unity system running version 4.2.x or later. Advanced deduplication is available only on:
    • Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
    • Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
    • All-Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
    -cifsSyncWrites
    Enable synchronous write operations for CIFS (SMB) network shares. Valid values are:
    • yes
    • no
    -cifsOpLocks
    Enable opportunistic file locks (oplocks) for CIFS network shares. Valid values are:
    • yes
    • no
    -cifsNotifyOnWrite
    Enable to receive notifications when users write to a CIFS share. Valid values are:
    • yes
    • no
    -cifsNotifyOnAccess
    Enable to receive notifications when users access a CIFS share. Valid values are:
    • yes
    • no
    -cifsNotifyDirDepth
    If the value for -cifsNotifyOnWrite or -cifsNotifyOnAccess is yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
    -sched
    Type the ID of the schedule to apply to the file system. View protection schedules explains how to view the IDs of the schedules on the system.
    -schedPaused
    Pause the schedule specified for the -sched qualifier. Valid values are:
    • yes
    • no
    -noSched
    Unassigns the protection schedule.
    -fastvpPolicy
    Specify the FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
    • startHighThenAuto (default)—Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
    • auto—Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource's data based on the storage resource's performance statistics such that data is relocated among tiers according to I/O activity.
    • highest—Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
    • lowest—Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
    -replDest
    Specifies whether the resource is a replication destination. Valid values are:
    • yes
    • no
    -poolFullPolicy
    Specifies the policy to follow when the pool is full and a write to the file system is attempted. This attribute enables you to preserve snapshots on the file system when a pool is full. Valid values are:
    • deleteAllSnaps—Delete snapshots that are associated with the file system when the pool reaches full capacity.
    • failWrites—Fail write operations to the file system when the pool reaches full capacity.
    -eventProtocols
    Specifies a list of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
    • nfs—Enable Events Publishing for NFS.
    • cifs—Enable Events Publishing for CIFS (SMB).
    -flrMinRet
    Specify the shortest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. The format is (<integer> d|m|y) | infinite, where:
    • d: days
    • m: months
    • y: years
    • infinite
    The default is 1 day (1d).
    -flrDefRet
    Specify the default retention period that is used in an FLR-enabled file system where a file is locked, but a retention period was not specified at the file level.

    The format is (<integer> d|m|y) | infinite.

    • d: days
    • m: months
    • y: years (FLR-C compliance default is 1 year, 1y)
    • infinite — FLR-E (enterprise) default

    Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    The value of this parameter must be greater than the minimum retention period -flrMinRet.

    -flrMaxRet
    Specify the longest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. The format is (<integer> d|m|y) | infinite. Values are:
    • d: days
    • m: months
    • y: years
    • infinite (default)

    The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07.

    The value of this parameter must be greater than the default retention period -flrDefRet.

    -flrAutoLock
    Specify whether automatic file locking is enabled for all new files in an FLR-enabled file system. Valid values are:
    • yes
    • no
    -flrAutoDelete
    Specify whether locked files in an FLR-enabled file system will automatically be deleted once the retention period expires. Valid values are:
    • yes
    • no
    -flrPolicyInterval
    If -flrAutoLock is set to yes, specify a time interval for how long after files are modified they will be automatically locked in an FLR-enabled file system.

    The format is <value> <qualifier>, where value is an integer and the qualifier is:

    • m--minutes
    • h--hours
    • d--days

    The value should be greater than 1 minute and less than 366 days.

    -errorThreshold
    Specify the threshold percentage that, when exceeded, error alert messages will be generated. The range is from 0 to 99, and the default value is 95%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value greater than the -warningThreshold.
    -warningThreshold
    Specify the threshold percentage that, when exceeded, warning alert messages will be generated. The range is from 0 to 99, and the default value is 75%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value less than the -errorThreshold value, and greater than or equal to the -infoThreshold value.
    -infoThreshold
    Specify the threshold percentage that, when exceeded, informational alert messages will be generated. The range is from 0 to 99, and the default value is 0 (disabled). This option must be set to a value less than the -warningThreshold value.
    Example

    The following command specifies Events Publishing protocols:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 set -eventProtocols nfs,cifs
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    ID = res_1
    Operation completed successfully.
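    As a further hedged sketch, the same set command can enable data reduction and advanced deduplication on an existing thin file system, subject to the pool and model restrictions noted above; res_1 is a placeholder ID:

    ```shell
    # Illustrative sketch (placeholder ID): enable data reduction and advanced
    # deduplication on thin file system res_1 (All-Flash pool required).
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 set -dataReduction yes -advancedDedup yes
    ```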

    Delete file systems

    Delete a file system.

    Note:  Deleting a file system removes all network shares, and optionally snapshots, associated with the file system. After the file system is deleted, the files and folders inside it cannot be restored from snapshots. Back up the data from a file system before deleting it from the storage system.
    Note:  You cannot delete an FLR-C enabled file system that currently has locked and protected files. An FLR-E enabled file system can be deleted even if it contains protected files.
    Format
    /stor/prov/fs {-id <value> | -name <value>} delete [-deleteSnapshots {yes | no}] [-async]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the file system to delete.
    -name
    Type the name of the file system to delete.
    Action qualifiers
    Qualifier
    Description
    -deleteSnapshots
    Specifies that snapshots of the file system can be deleted along with the file system itself. Valid values are:
    • yes
    • no (default)
    -async
    Run the operation in asynchronous mode.
    Example

    The following command deletes file system res_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 delete
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Manage user quotas for file systems and quota trees

    A user quota limits the amount of storage consumed by an individual user storing data on a file system or quota tree.

    The following table lists the attributes for user quotas:

    Table 13. Attributes for user quotas
    Attribute
    Description
    File system
    Identifier for the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
    Path
    Quota tree path relative to the root of the file system. If the user quota is on a file system, either do not use this qualifier, or set its value to /.
    User ID
    User identifier on the file system.
    Unix name
    Comma-separated list of Unix user names associated with the user quota. Multiple Unix names may appear when the file system is a multiple protocol file system and multiple Unix names map to one Windows name in the user mapping configuration file (nxtmap).
    Windows SIDs
    Comma-separated list of Windows SIDs associated with the user quota.
    Note:  The number of displayed SIDs is limited to 16. If the number of SIDs is over 16, only the first 16 are displayed.
    Windows name
    Comma-separated list of Windows user names associated with the user quota. Multiple Windows names may appear when the file system is a multiple protocol file system and multiple Windows names map to one Unix name in the user mapping configuration file (nxtmap).
    Note:  If the number of Windows names is over 16, only the first 16 Windows names are displayed.
    Space used
    Space used on the file system or quota tree by the specified user.
    Soft limit
    Preferred limit on storage usage. The system issues a warning when the soft limit is reached.
    Hard limit
    Absolute limit on storage usage. If the hard limit is reached for a user quota on a file system or quota tree, the user will not be able to write data to the file system or tree until more space becomes available.
    Grace period left
    Time period for which the system counts down days once the soft limit is met. If the user's grace period expires, users cannot write to the file system or quota tree until more space becomes available, even if the hard limit has not been reached.
    State
    State of the user quota. Valid values are:
    • OK
    • Soft limit exceeded
    • Soft limit exceeded and grace period expired
    • Hard limit exceeded
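    The four states in the table map onto the soft limit, hard limit, and grace period roughly as follows. This is an illustrative sketch inferred from the attribute descriptions above; the exact boundary conditions (reached versus strictly exceeded) and the treatment of a 0 limit as "no limit" are assumptions, not documented uemcli behavior.

```python
def quota_state(space_used, soft_limit, hard_limit, grace_expired=False):
    """Return the quota state string shown in the State attribute.

    Illustrative only: a limit of 0 is treated as "no limit", and the
    grace period matters only once the soft limit has been exceeded.
    """
    if hard_limit and space_used >= hard_limit:
        return "Hard limit exceeded"
    if soft_limit and space_used >= soft_limit:
        if grace_expired:
            return "Soft limit exceeded and grace period expired"
        return "Soft limit exceeded"
    return "OK"
```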

    Create a user quota on a file system or quota tree

    You can create user quotas on a file system or quota tree:

    • Create a user quota on a file system to limit or track the amount of storage space that an individual user consumes on that file system. When you create or modify a user quota on a file system, you have the option to use default hard and soft limits that are set at the file-system level.
    • Create a user quota on a quota tree to limit or track the amount of storage space that an individual user consumes on that tree. When you create a user quota on a quota tree, you have the option to use the default hard and soft limits that are set at the quota-tree level.
    Format
    /quota/user create [-async] {-fs <value> | -fsName <value>} [-path <value>] {-userId <value> | -unixName <value> | -winName <value>} {-default | [-softLimit <value>] [-hardLimit <value>]}
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -fs
    Specify the ID of the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
    -fsName
    Specify the name of the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
    -path
    Specify either of the following:
    • If the user quota is for a file system, either do not use this qualifier, or set its value to /.
    • If the user quota is for a quota tree, specify the quota tree path relative to the root of the file system.
    -userId
    Specify the user ID on the file system or quota tree.
    -unixName
    Specify the UNIX user name associated with the specified user ID.
    -winName

    Specify the Windows user name associated with the specified user ID. The format is:

    [<domain>\]<name>
    -default

    Inherit the default quota limit settings for the user. To view the default limits, use the following command:

    /quota/config -fs <value> -path <value> show

    If a soft limit or hard limit has not been specified for the user, the default limit is applied.

    -softLimit
    Specify the preferred limit on storage usage by the user. A value of 0 means no limitation. If the hard limit is specified and the soft limit is not specified, there will be no soft limitation.
    -hardLimit
    Specify the absolute limit on storage usage by the user. A value of 0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
    Note:  The hard limit should be larger than the soft limit.
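    How the limits combine can be sketched as follows. This is an assumption based on the -default, -softLimit, and -hardLimit descriptions above, not actual uemcli logic: -default is mutually exclusive with explicit limits per the command format, and an omitted explicit limit means no limitation.

```python
def effective_limits(soft=None, hard=None, use_default=False, defaults=(0, 0)):
    """Resolve the (soft, hard) limits a new user quota receives; 0 = unlimited.

    Assumed behavior for illustration: with -default, the file system's or
    quota tree's default limits apply; otherwise an omitted limit is 0.
    """
    if use_default:
        return defaults
    return (soft if soft is not None else 0,
            hard if hard is not None else 0)
```

    Under this reading, specifying both -softLimit and -hardLimit (as in the example below) ignores the defaults entirely.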
    Example

    The following command creates a user quota for user 201 on file system res_1, quota tree /qtree_1. The new user quota has the following limits:

    • Soft limit is 20 GB.
    • Hard limit is 50 GB.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user create -fs res_1 -path /qtree_1 -userId 201 -softLimit 20G -hardLimit 50G
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    View user quotas

    You can display space usage and limit information for user quotas on a file system or quota tree.

    Because there can be a large number of user quotas on a file system or quota tree, the system updates user quota data only every 24 hours to reduce the impact on system performance. You can use the refresh action to update the data more often. Use the /quota/config show command to view the time of the last data refresh.

    Note:  The Unix name and Windows name values are returned only when displaying a single user quota.
    Note:   The show action command explains how to change the output format.
    Format
    /quota/user {-fs <value> | -fsName <value>} [-path <value>] [-userId <value> | -unixName <value> | -winName <value>] [-exceeded] show
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify either of the following:
    • If the user quota is for a file system, either do not use this qualifier, or set its value to /.
    • If the user quota is for a quota tree, specify the quota tree path relative to the root of the file system.
    -userId
    Specify the user ID on the file system or quota tree.
    -unixName
    Specify the Unix user name.
    -winName

    Specify the Windows user name. The format is:

    [<domain>\]<name>
    -exceeded
    Only show user quotas whose state is not OK.
    Example

    The following command displays space usage information for user nasadmin on file system res_1, quota tree /qtree_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /qtree_1 -unixName nasadmin show -detail
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    1:    User ID             = 201
          Unix name           = nasadmin
          Windows names       = dell\nasadmin, dell\nasad
          Windows SIDs        = S-1-5-32-544, S-1-5-32-545
          Space used          = 32768 (32K)
          Soft limit          = 16384 (16K)
          Hard limit          = 65536 (64K)
          Grace period left   = 7d 3h
          State               = Soft limit exceeded

    Change quota limits for a specific user

    You can change limits for user quotas on a file system or quota tree.

    Format
    /quota/user {-fs <value> | -fsName <value>} [-path <value>] {-userId <value> | -unixName <value> | -winName <value>} set [-async] {-default | [-softLimit <value>] [-hardLimit <value>]}
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify either of the following:
    • If the user quota is for a file system, either do not use this qualifier, or set its value to /.
    • If the user quota is for a quota tree, specify the quota tree path relative to the root of the file system.
    -userId
    Specify the user ID on the file system or quota tree.
    -unixName
    Specify the UNIX user name associated with the specified user ID.
    -winName

    Specify the Windows user name associated with the specified user ID. The format is:

    [<domain>\]<name>
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -default
    Inherit the default quota limit settings for the user. To view the default limit, use the command:

    /quota/config -fs <value> -path <value> show

    If a soft or hard limit has not been specified for the user, the default limit is applied.

    -softLimit
    Specify the preferred limit on storage usage by the user. A value of 0 means no limitation. If the hard limit is specified and the soft limit is not specified, there will be no soft limitation.
    -hardLimit
    Specify the absolute limit on storage usage by the user. A value of 0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
    Note:  The hard limit should be larger than the soft limit.
    Example

    The following command makes the following changes to the user quota for user 201 on file system res_1, quota tree path /qtree_1:

    • Sets the soft limit to 10 GB.
    • Sets the hard limit to 20 GB.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /qtree_1 -userId 201 set -softLimit 10G -hardLimit 20G
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Refresh user quotas

    Because there can be a large number of user quotas on a file system or quota tree, the system updates user quota data only every 24 hours to reduce the impact on system performance. Use the refresh action to update the data more often. Use the /quota/config show command to view the time of the last data refresh.

    Format
    /quota/user {-fs <value> | -fsName <value>} [-path <value>] refresh [-updateNames] [-async]
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify either of the following:
    • If the user quota is on a file system, either do not use this qualifier, or set its value to /.
    • If the user quota is on a quota tree, specify the quota tree path relative to the root of the file system.
    -updateNames
    Refresh the usage data of user quotas and the Windows user names, Windows SIDs, and Unix user names within a file system or quota tree.
    Note:  Refreshing user names causes latency because the system needs to query the name servers, so use this qualifier sparingly. The system automatically updates Windows user names, Windows SIDs, and Unix user names for user quotas every 24 hours.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command refreshes all user quotas on file system res_1, quota tree tree_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /tree_1 refresh
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Manage quota trees

    A quota tree is a directory that has a quota applied to it, which limits or tracks the total storage space consumed in that directory. The hard limit, soft limit, and grace period settings you define for a quota tree are used as defaults for the quota tree's user quotas. You can override the hard and soft limit settings by explicitly specifying these settings when you create or modify a user quota.

    The following table lists the attributes for quota trees:

    Table 14. Attributes for quota trees
    Attribute
    Description
    File system
    Identifier for the file system.
    Path
    Quota tree path relative to the root of the file system.
    Description
    Quota tree description.
    Soft limit
    Preferred limit on storage usage. The system issues a warning when the soft limit is reached.
    Hard limit
    Absolute limit on storage usage. If the hard limit is reached for a quota tree, users will not be able to write data to the tree until more space becomes available.
    Grace period left
    Time period for which the system counts down days once the soft limit is met. If the quota tree's grace period expires, users cannot write to the quota tree until more space becomes available, even if the hard limit has not been reached.
    State
    State of the quota tree. Valid values are:
    • OK
    • Soft limit exceeded
    • Soft limit exceeded and grace period expired
    • Hard limit exceeded

    Create a quota tree

    Create a quota tree to track or limit the amount of storage consumed on a directory. You can use quota trees to:

    • Set storage limits on a project basis. For example, you can establish quota trees for a project directory that has multiple users sharing and creating files in it.
    • Track directory usage by setting the quota tree's hard and soft limits to 0 (zero).
    Format
    /quota/tree create [-async] { -fs <value> | -fsName <value>} -path <value> [-descr <value>] {-default | [-softLimit <value>] [-hardLimit <value>]}
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -fs
    Specify the ID of the file system in which the quota tree will reside. The file system cannot be read-only or a replication destination.
    -fsName
    Specify the name of the file system in which the quota tree will reside. The file system cannot be read-only or a replication destination.
    -path
    Specify the quota tree path relative to the root of the file system.
    -descr
    Specify the quota tree description.
    -default
    Specify to inherit the default quota limit settings for the tree. Use the View quota trees command to view these default limits.
    -softLimit
    Specify the preferred limit for storage space consumed on the quota tree. A value of 0 means no limitation. If the hard limit is specified and soft limit is not specified, there will be no soft limitation.
    -hardLimit
    Specify the absolute limit for storage space consumed on the quota tree. A value of 0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
    Note:  The hard limit should be larger than the soft limit.
    Example

    The following command creates quota tree /qtree_1 on file system res_1. The new quota tree has the following characteristics:

    • Soft limit is 100 GB.
    • Hard limit is 200 GB.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree create -fs res_1 -path /qtree_1 -softLimit 100G -hardLimit 200G
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    View quota trees

    You can display space usage and limit information for all quota trees on a file system or a single quota tree.

    Because there can be a large number of quota trees on a file system, the system updates quota data only every 24 hours to reduce the impact on system performance. You can use the refresh action to update the data more often. Use the /quota/config show command to view the time of the last data refresh.

    Note:   The show action command explains how to change the output format.
    Format
    /quota/tree {-fs <value> | -fsName <value>} [-path <value>] [-exceeded] show
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify the quota tree path, which is relative to the root of the file system.
    -exceeded
    Only show quota trees whose state is not OK.
    Example

    The following command displays space usage information for all quota trees on file system res_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 show -detail
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    1:    Path                = /qtree_1
          Description         = this is tree 1
          Space used          = 32768 (32K)
          Soft limit          = 53687091200 (50G)
          Hard limit          = 107374182400 (100G)
          Grace period left   = 7d
          State               = OK

    2:    Path                = /qtree_2
          Description         =
          Space used          = 32768 (32K)
          Soft limit          = 16384 (16K)
          Hard limit          = 65536 (64K)
          Grace period left   = 7d
          State               = Soft limit exceeded

    Set quota limits for a specific quota tree

    You can specify that a specific quota tree inherit the associated file system's default quota limit settings, or you can manually set soft and hard limits on the quota tree.

    Format
    /quota/tree {-fs <value> | -fsName <value>} -path <value> set [-async] [-descr <value>] {-default | [-softLimit <value>] [-hardLimit <value>]}
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify the quota tree path, which is relative to the root of the file system.
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -descr
    Specify the quota tree description.
    -default
    Inherit the default quota limit settings from the associated file system. To view the default limits, use the following command:

    /quota/config -fs <value> -path <value> show

    -softLimit
    Specify the preferred limit for storage space consumed on the quota tree. A value of 0 means no limitation.
    -hardLimit
    Specify the absolute limit for storage space consumed on the quota tree. A value of 0 means no limitation.
    Note:  The hard limit should be equal to or larger than the soft limit.
    Example

    The following command makes the following changes to quota tree /qtree_1 in file system res_1:

    • Sets the soft limit to 50 GB.
    • Sets the hard limit to 100 GB.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 -path /qtree_1 set -softLimit 50G -hardLimit 100G
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Refresh all quota trees on a file system

    Because there can be a large number of quota trees on a file system, the system updates quota data only every 24 hours to reduce the impact on system performance. You can use the refresh action to update the data more often. To view the time of the last data refresh, see the Tree quota update time output field of the /quota/config show command.

    Format
    /quota/tree {-fs <value> | -fsName <value>} refresh [-async]
    Object qualifier
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command refreshes quota information for all quota trees on file system res_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 refresh
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Delete quota trees

    You can delete all quota trees on a file system or a specified quota tree.

    Format
    /quota/tree {-fs <value> | -fsName <value>} [-path <value>] delete [-async]
    Object qualifier
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify either of the following:
    • To delete all quota trees on the file system, either do not use this qualifier, or set its value to /.
    • To delete a specific quota tree, specify the quota tree path relative to the root of the file system.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command deletes quota tree /qtree_1 on file system res_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 -path /qtree_1 delete
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    Manage quota settings

    Managing quota settings includes selecting a quota policy for a file system, setting default limits for a file system or quota tree, setting a default grace period, and disabling the enforcement of space usage limits for a quota tree and user quotas on the tree.

    The following table lists the attributes for configuring quota functionality:

    Table 15. Attributes for configuring quota functionality
    Attribute
    Description
    Path
    Quota tree path relative to the root of the file system. For a file system, either do not use this attribute, or set its value to /.
    Quota policy
    (Applies to file systems only.) Quota policy for the file system. Valid values are:
    • blocks. Calculates space usage in terms of file system blocks (8 KB units). Block usage depends solely on the number of bytes added to or removed from the file. Any operation that allocates or removes blocks, such as creating, expanding, or deleting a directory, writing or deleting files, or creating or deleting symbolic links, changes block usage.

      When using the blocks policy, a user can create a sparse file whose logical size is larger than the space it actually consumes, because it uses fewer blocks on the drive.

      Optionally, use this policy for NFS-only and multiprotocol file systems.

    • filesize (default). Calculates space usage in terms of logical file sizes and ignores the size of directories and symbolic links. Use the File policy in the following circumstances:
      • When you have an SMB-only file system
      • When file sizes are critical to quotas, such as when user usage is based on the size of the files created, and exceeding the size limit is unacceptable.
    User quota
    (Applies to file systems only.) Indicates whether to enforce user quotas on the file system. Valid values are:
    • on. Enable the enforcement of user quotas on the file system or quota tree.
    • off. Disable the enforcement of user quotas on the file system or quota tree.
    Note:  Because these operations impact system performance, it is recommended that you perform them only during non-peak production hours. When user quota enforcement is enabled, you can change quota settings without impacting performance.
    Deny access
    Indicates whether to enforce quota space usage limits for the file system. Value is one of the following:
    • yes. (Default) Enforce quota space usage limits for the file system or quota tree. When you choose this option, the ability to allocate space is determined by the quota settings.
    • no. Do not enforce quota functionality for the file system or quota tree. When you choose this option, the ability to allocate space will not be denied when a quota limit is crossed.
    Grace period
    Time period for which the system counts down days once the soft limit is met. If the grace period expires for a file system or quota tree, users cannot write to the file system or quota tree until more space becomes available, even if the hard limit has not been crossed.
    Default soft limit
    Default preferred limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. The system issues a warning when the soft limit is reached.
    Default hard limit
    Default hard limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. If the hard limit is reached for a file system or quota tree, users will not be able to write data to the file system or tree until more space becomes available. If the hard limit is reached for a user quota on a file system or quota tree, that user will not be able to write data to the file system or tree.
    Tree quota update time
    Tree quota report updating time. The format is YYYY-MM-DD HH:MM:SS.
    User quota update time
    User quota report updating time. The format is YYYY-MM-DD HH:MM:SS.
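    The difference between the blocks and filesize policies described above is easy to reproduce with a sparse file on any Linux host. This sketch runs locally and does not touch the storage system; note that the host reports allocation in 512-byte units rather than the 8 KB file system blocks mentioned above.

```python
import os
import tempfile

# A sparse file makes the two policies diverge: its logical size (what the
# filesize policy would count) far exceeds the space actually allocated
# (what a blocks-style policy would count).
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "sparse.dat")
    with open(path, "wb") as f:
        f.truncate(100 * 1024 * 1024)    # 100 MiB logical size, no data written
    st = os.stat(path)
    logical = st.st_size                 # bytes counted by the filesize policy
    allocated = st.st_blocks * 512       # bytes actually allocated on the drive
print(f"logical={logical} allocated={allocated}")
```

    On a typical Linux file system, the allocated figure is far below the 100 MiB logical size, which is why quotas under the filesize policy charge this user much more than quotas under the blocks policy would.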

    Configure quota settings

    You can configure quota configuration settings for a file system or quota tree.

    Format
    /quota/config {-fs <value> | -fsName <value>} [-path <value>] set [-async] {-policy {blocks | filesize} | [-userQuota {on | off | clear}] [-gracePeriod <value>] [-defaultSoft <value>] [-defaultHard <value>] [-denyAccess {yes | no}]}
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system for which you are configuring quota settings. The file system cannot be read-only or a replication destination.
    -fsName
    Specify the name of the file system for which you are configuring quota settings. The file system cannot be read-only or a replication destination.
    -path
    Specify the quota tree path relative to the root of the file system. For a file system, either do not use this attribute, or set its value to /.
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -userQuota
    Indicates whether to enforce user quotas on the file system or quota tree. Valid values are:
    • on - Enable the enforcement of user quotas on the file system or quota tree.
    • off - Disable the enforcement of user quotas on the file system or quota tree. When you disable user quotas, the current user quota settings still exist unless you clear them. These settings are automatically reapplied when user quotas are re-enabled.
    • clear - Clear user quota settings after disabling a user quota.
    Because enabling and disabling the enforcement of user quotas impacts system performance, it is recommended that you perform these operations only during non-peak production hours. When user quota enforcement is enabled, you can change user quota settings without impacting performance.
    -policy
    Specify the quota policy for the file system. Valid values are:
    • blocks (Blocks policy) - Calculates space usage in terms of file system blocks (8 KB units), and includes drive usage by directories and symbolic links in the calculations.
    • filesize (File policy) - Calculates space usage in terms of logical file sizes, and ignores the size of directories and symbolic links.

    For more information, see Manage quota settings.

    -gracePeriod
    Specify the time period for which the system counts down days once the soft limit is met. If the grace period expires for a quota tree, users cannot write to the quota tree until more space becomes available, even if the hard limit has not been crossed. If the grace period expires for a user quota on a file system or quota tree, the individual user cannot write to the file system or quota tree until more space becomes available for that user. The default grace period is 7 days.

    The format is:

    <value><qualifier>

    where:

    • value - An integer value, depending on the associated qualifier:
      • If the qualifier is m (minutes), the valid range is from 1 to 525600.
      • If the qualifier is h (hours), the valid range is from 1 to 8760.
      • If the qualifier is d (days), the valid range is from 1 to 365.
    • qualifier - One of the following value qualifiers (case insensitive):
      • m - Minutes
      • h - Hours
      • d - Days
    Note:  If you update a grace period value, the new value affects only the quota or quotas which will exceed the soft limit after the update is performed. Any existing quotas which have been counting down using the older grace period value will not be affected.
    -defaultSoft
    Specifies the default preferred limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. The system issues a warning when the soft limit is reached.
    -defaultHard
    Specify the default hard limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the file system's quota trees. If the hard limit is reached for a quota tree, users will not be able to write data to the file system or tree until more space becomes available. If the hard limit is reached for a user quota on a file system or quota tree, that particular user will not be able to write data to the file system or tree.
    Note:  The hard limit should be larger than the soft limit.
    -denyAccess
    Indicates whether to enable quota limits for the file system. Valid values are:
    • yes - Enable quota functionality for the file system. When you choose this option, the ability to allocate space is determined by the quota settings.
    • no - Disable quota functionality for the file system. When you choose this option, the ability to allocate space will not be denied when a quota limit is reached.
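    The -gracePeriod format and its per-qualifier ranges described above can be captured in a small validator. This is illustrative only, not part of uemcli; it encodes the case-insensitive m/h/d qualifiers and the documented ranges.

```python
import re

# Valid ranges per qualifier, from the -gracePeriod description above.
GRACE_RANGES = {"m": (1, 525600), "h": (1, 8760), "d": (1, 365)}

def validate_grace_period(text):
    """Parse a -gracePeriod value such as "5d" or "12H" into (value, qualifier).

    Illustrative validator encoding the documented ranges; raises
    ValueError for malformed or out-of-range input.
    """
    match = re.fullmatch(r"(\d+)([mhd])", text.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"malformed grace period: {text!r}")
    value, qualifier = int(match.group(1)), match.group(2).lower()
    low, high = GRACE_RANGES[qualifier]
    if not low <= value <= high:
        raise ValueError(f"{text!r}: value for {qualifier!r} must be {low}-{high}")
    return value, qualifier
```

    For example, "5d" (as used in the example below) is accepted, while "400d" is rejected because the day range ends at 365.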
    Example

    The following command configures quota tree /qtree_1 in file system res_1 as follows:

    • Sets the default grace period to 5 days.
    • Sets the default soft limit to 10 GB.
    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 -path /qtree_1 set -gracePeriod 5d -defaultSoft 10G
    Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection

    Operation completed successfully.

    View quota configuration settings

    You can display the quota configuration settings for a file system, a specific quota tree, or a file system and all of its quota trees.

    Format
    /quota/config {-fs <value> | -fsName <value>} [-path <value>] show
    Object qualifiers
    Qualifier
    Description
    -fs
    Specify the ID of the file system.
    -fsName
    Specify the name of the file system.
    -path
    Specify the quota tree path relative to the root of the file system. For a file system, either do not use this qualifier, or set its value to /. If this value is not specified, the command displays the quota configuration at the file system level and the quota configuration of all quota trees within the specified file system.
    Example

    The following command lists quota configuration information for file system res_1 and all of its quota trees:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    Path                     = /
          Quota policy             = blocks
          User quota               = on
          Deny access              = yes
          Grace period             = 7d
          User soft limit          = 53687091200 (50G)
          User hard limit          = 107374182400 (100G)
          Tree quota update time   = 2014-10-31 13:17:28
          User quota update time   = 2014-10-31 13:20:22
    
    2:    Path                     = /qtree_1
          Quota policy             = blocks
          User quota               = on
          Deny access              = yes
          Grace period             = 7d
          User soft limit          = 1073741824 (1G)
          User hard limit          = 10737418240 (10G)
          Tree quota update time   =
          User quota update time   =
    
    
                            

    Manage NFS network shares

    Network file system (NFS) network shares use the NFS protocol to provide an access point for configured Linux/UNIX hosts, or IP subnets, to access file system storage. NFS network shares are associated with an NFS file system.

    Each NFS share is identified by an ID.

    The following table lists the attributes for NFS network shares:

    Table 16. NFS network share attributes
    Attribute
    Description
    ID
    ID of the share.
    Name
    Name of the share.
    Description
    Brief description of the share.
    Local path
    Name of the path, relative to the file system, of the directory that the share provides access to. Default is /, the root of the file system. A local path must point to an existing directory within the file system.
    Export path
    Export path, used by hosts to connect to the share.
    Note:   The export path is a combination of the network name or IP address of the associated NAS server and the name of the share.
    File system
    ID of the parent file system associated with the NFS share.
    Default access
    Default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
    • ro — Hosts have read-only access to primary storage and snapshots associated with the share.
    • rw — Hosts have read/write access to primary storage and snapshots associated with the share.
    • roroot — Hosts have read-only access to primary storage and snapshots associated with the share, but the root of the NFS client has root access.
    • root — Hosts have read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
    • na — Hosts have no access to the share or its snapshots.
    Advanced host management enabled
    Indicates whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the /remote/host command.) Values are (case insensitive):
    • yes (default) — Host lists contain the IDs of registered hosts.
    • no — Host lists contain comma-separated strings, with each string defining a host name, IP address, subnet, netgroup, or DNS domain.

    For information about specifying host lists by using a string, see Specifying host lists by using a string.

    Read-only hosts
    Comma-separated list of hosts that have read-only access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    Read/write hosts
    Comma-separated list of hosts that have read-write access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    Read-only root hosts
    Comma-separated list of hosts that have read-only root access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    Root hosts
    Comma-separated list of hosts that have read-write root access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    No access hosts
    Comma-separated list of hosts that have no access to the share or its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    Allow SUID
    Specifies whether to allow users to set the setuid and setgid Unix permission bits. Values are (case insensitive):
    • yes (default) — Users can set the setuid and setgid Unix permission bits. This allows users to run the executable with privileges of the file owner.
    • no — Users cannot set the setuid and setgid Unix permission bits.
    Anonymous UID
    (Applies when the host does not have "allow root" access provided to it.) UID of the anonymous account. This account is mapped to client requests that arrive with a user ID of 0 (zero), which is typically associated with the user name root. The default value is 4294967294 (-2), which is typically associated with the nobody user (root squash).
    Anonymous GID
    (Applies when the host does not have "allow root" access provided to it.) GID of the anonymous account. This account is mapped to client requests that arrive with a user ID of 0 (zero), which is typically associated with the user name root. The default value is 4294967294 (-2), which is typically associated with the nobody user (root squash).
    Creation time
    Creation time of the share.
    Last modified time
    Last modified time of the share.
    Role
    The specific usage of the file share. Value is one of the following:
    • production — default for source NAS server.
    • backup — default for destination NAS server. Automatically set for all shares created on a NAS server that is acting as a replication destination. In all other cases, production is automatically set as the role for the NFS share.
    Minimum security
    Specifies the minimal security option that the client must provide for the NFS mount operation (in fstab). Value is one of the following, from lower to higher security level:
    • sys — No server-side authentication (the server relies on NFS client authentication). This is the default when secure NFS is not configured for the NAS server (also known as AUTH_SYS security).
    • krb5 — Kerberos v5 authentication. Default when secure NFS is configured for the NAS server.
    • krb5i — Kerberos v5 authentication and integrity.
    • krb5p — Kerberos v5 authentication and integrity; encryption is enabled.
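The default Anonymous UID/GID value in the table above, 4294967294, is simply -2 interpreted as an unsigned 32-bit integer, which is why the documentation shows it as "4294967294 (-2)". A minimal sketch of that correspondence:

```python
import ctypes

# -2 viewed through a 32-bit unsigned integer yields 4294967294,
# the default anonymous UID/GID (root squash -> "nobody") above.
print(ctypes.c_uint32(-2).value)  # 4294967294
print((-2) % 2**32)               # 4294967294, same result without ctypes
```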
    Specifying host lists by using a string

    If advanced host management is disabled, a host list can contain a combination of network host names, IP addresses, subnets, netgroups, or DNS domains. The following formatting rules apply:

    • An IP address can be an IPv4 or IPv6 address.
    • A subnet can be an IP address/netmask or IP address/prefix length (for example: 168.159.50.0/255.255.255.0 or 168.159.50.0/24).
    • The format of the DNS domain follows the UNIX/Linux format; for example, *.example.com. When specifying wildcards in fully qualified domain names, dots are not included in the wildcard. For example, *.example.com includes one.example.com, but does not include one.two.example.com.
    • To specify that a name is a netgroup name, prepend the name with @. Otherwise, it is considered to be a host name.

    If advanced host management is enabled, host lists contain the host IDs of existing hosts. You can obtain these IDs by using the /remote/host command.
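The string-based host-list rules above can be sketched as a small classifier. This is an illustration only (the storage system's own parser is authoritative), and the `classify` helper is hypothetical; it also demonstrates that the two subnet spellings from the text denote the same network.

```python
import ipaddress

def classify(entry: str) -> str:
    """Classify one comma-separated host-list entry per the rules above.

    Hypothetical helper for illustration: '@' marks a netgroup, '*.'
    a DNS domain, '/' a subnet (netmask or prefix-length form), and
    anything else is an IP address or a host name.
    """
    if entry.startswith("@"):
        return "netgroup"
    if entry.startswith("*."):
        return "dns-domain"
    if "/" in entry:
        ipaddress.ip_network(entry, strict=False)  # accepts both subnet forms
        return "subnet"
    try:
        ipaddress.ip_address(entry)
        return "ip-address"
    except ValueError:
        return "hostname"

# The two subnet spellings given in the text are equivalent:
a = ipaddress.ip_network("168.159.50.0/255.255.255.0")
b = ipaddress.ip_network("168.159.50.0/24")
print(a == b)  # True
print(classify("@trusted"), classify("*.example.com"), classify("one.example.com"))
```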

    Create NFS network shares

    Create an NFS share to export a file system through the NFS protocol.

    Note:  Share access permissions set for specific hosts take effect only if the host-specific setting is less restrictive than the default access setting for the share. Additionally, setting access for a specific host to “No Access” always takes effect over the default access setting.
    • Example 1: If the default access setting for a share is Read-Only, setting the access for a specific host configuration to Read/Write will result in an effective host access of Read/Write.
    • Example 2: If the default access setting for the share is Read-Only, setting the access permission for a particular host configuration to No Access will take effect and prevent that host from accessing the share.
    • Example 3: If the default access setting for a share is Read/Write, setting the access permission for a particular host configuration to Read-Only will result in an effective host access of Read/Write, because the host-specific setting is more restrictive than the default and therefore does not take effect.
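The rule behind the three examples above can be sketched as a small resolver. This is a simplification that covers only the ro/rw/na levels (the root variants are omitted for brevity), and the ordering na < ro < rw is an assumption about what "less restrictive" means here.

```python
# Sketch of the effective-access rule described above; illustrative only.
LEVEL = {"na": 0, "ro": 1, "rw": 2}  # higher value = less restrictive (assumed ordering)

def effective_access(default: str, host: str) -> str:
    if host == "na":  # "No Access" always takes effect over the default
        return "na"
    # A host-specific setting applies only when it is less restrictive
    # than the share's default access setting.
    return host if LEVEL[host] > LEVEL[default] else default

print(effective_access("ro", "rw"))  # Example 1: rw
print(effective_access("ro", "na"))  # Example 2: na
print(effective_access("rw", "ro"))  # Example 3: rw
```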
    Prerequisite

    Configure a file system to which to associate the NFS network shares. Create file systems explains how to create file systems on the system.

    Format
    /stor/prov/fs/nfs create [-async] -name <value> [-descr <value>] {-fs <value> | -fsName <value>} -path <value> [-defAccess {ro | rw | roroot | root | na}] [-advHostMgmtEnabled {yes | no}] [-roHosts <value>] [-rwHosts <value>] [-roRootHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-minSecurity {sys | krb5 | krb5i | krb5p}] [-allowSuid {yes | no}] [-anonUid <value>] [-anonGid <value>]
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -name
    Type a name for the share. By default, this value, along with the network name or the IP address of the NAS server, constitutes the export path by which hosts access the share.

    You can use the forward slash character (/) to create a "virtual" name space that is different from the real path name used by the share. For example, /fs1 and /fs2 can be represented as vol/fs1 and vol/fs2. The following considerations apply:

    • You cannot use the / character as the first character of the share name.
    • An NFSv4 client cannot mount a share using a name that contains the / character. Instead, the client must use the share path. To use the share path, you must set the NAS server parameter nfs.showExportLevel to 0 or 1.
    -descr
    Type a brief description of the share.
    -fs
    Type the ID of the parent file system associated with the NFS share.
    -fsName
    Type the name of the parent file system associated with the NFS share.
    -path
    Type the path, relative to the root of the file system, of the directory that the share will provide access to. This path must correspond to an existing directory within the file system that was created from the host side.
    • Each share must have a unique local path. The initial share is created on the root of the file system.
    • Before you can create additional network shares within an NFS file system, you must create directories within the file system. Connect to the initial NFS share from a host with access to the share and set access permissions accordingly.
    -defAccess
    Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
    • ro — Hosts have read-only access to primary storage and snapshots associated with the share.
    • rw — Hosts have read/write access to primary storage and snapshots associated with the share.
    • roroot — Hosts have read-only access to primary storage and snapshots associated with the share. The root of the NFS client has root access.
    • root — Hosts have read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
    • na (default) — Hosts have no access to the share or its snapshots.
    -advHostMgmtEnabled
    Specify whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the /remote/host command.) Values are (case insensitive):
    • yes (default) — Host lists contain the IDs of registered hosts.
    • no — Host lists contain comma-separated strings, with each string defining a host name, IP address, subnet, netgroup, or DNS domain.

    For information about specifying host lists by using a string, see Specifying host lists by using a string.

    -roHosts
    Type the IDs of hosts that have read-only access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -rwHosts
    Type the IDs of hosts that have read-write access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -roRootHosts
    Type the IDs of hosts that have read-only root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -rootHosts
    Type the IDs of hosts that have read-write root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -naHosts
    Type the ID of each host configuration for which you want to block access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -minSecurity
    Specify the minimal security option that the client must provide for the NFS mount operation (in fstab). Value is one of the following, from lower to higher security level. All higher security levels are supported, and can be enforced by the client at negotiations for secure NFS access.
    • sys — No server-side authentication (the server relies on NFS client authentication). This is the default when secure NFS is not configured for the NAS server.
    • krb5 — Kerberos v5 authentication. Default when secure NFS is configured for the NAS server.
    • krb5i — Kerberos v5 authentication and integrity.
    • krb5p — Kerberos v5 authentication and integrity; encryption is enabled.
    -allowSuid
    Specifies whether to allow users to set the setuid and setgid Unix permission bits. Values are (case insensitive):
    • yes (default) — Users can set the setuid and setgid Unix permission bits. This allows users to run the executable with privileges of the file owner.
    • no — Users cannot set the setuid and setgid Unix permission bits.
    -anonUid
    Specify the UID of the anonymous account.
    -anonGid
    Specify the GID of the anonymous account.
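As described for -name above, the export path combines the NAS server's network name or IP address with the share name, and the share name may contain / to present a virtual namespace but may not begin with it. A minimal sketch, using the `server:/share` form shown later in the "View NFS share settings" output (the helper itself is hypothetical):

```python
def nfs_export_path(nas_server: str, share_name: str) -> str:
    """Build an export path as <server>:/<share name>, the form shown in the
    show output (e.g. SATURN.domain.emc.com:/MyNFSshare1). Hypothetical helper."""
    if share_name.startswith("/"):
        # Per the -name rules above, a share name cannot begin with '/'.
        raise ValueError("share name must not start with '/'")
    return f"{nas_server}:/{share_name}"

print(nfs_export_path("SATURN.domain.emc.com", "MyNFSshare1"))
# A '/' inside the name creates a virtual namespace:
print(nfs_export_path("SATURN.domain.emc.com", "vol/fs1"))
```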
    Example 1

    The following command shows the output when the specified path is not found because it does not start with "/", and the share is not created.

    uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs112 -fs res_26 -path "mypath"
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation failed. Error code: 0x900a002
    The system could not find the specified path. Please use an existing path. (Error Code:0x900a002)
    Job ID = N-1339
                            
    Example 2

    The following command shows the output when the path is correctly specified and the share is successfully created. The new NFS share has the following settings:

    • NFS share name of "testnfs112"
    • Parent file system of "res_26"
    • On the directory "/mypath"
    uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs112 -fs res_26 -path "/mypath"
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = NFSShare_20
    Operation completed successfully.
                            

    View NFS share settings

    View details of an NFS share. You can filter on the NFS share ID or view the NFS network shares associated with a file system ID.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/prov/fs/nfs [{-id <value> | -name <value> | -fs <value> | -fsName <value>}] show
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of an NFS share.
    -name
    Type the name of an NFS share.
    -fs
    Type the ID of an NFS file system to view the associated NFS network shares.
    -fsName
    Type the name of an NFS file system to view the associated NFS network shares.
    Example

    The following command lists details for all NFS network shares on the system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     ID                    = NFSShare_1
           Name                  = MyNFSshare1
           Description           = My nfs share
           File system           = res_26
           Local path            = /mypath
           Export path           = SATURN.domain.emc.com:/MyNFSshare1
           Default access        = na
           Advanced host mgmt.   = yes
           Read-only hosts       = 1014, 1015
           Read/write hosts      = 1016
           Read-only root hosts  =
           Root hosts            =
           No access hosts       =
           Creation time         = 2012-08-24 12:18:22
           Last modified time    = 2012-08-24 12:18:22
           Role                  = production
           Minimum security      = krb5
           Allow SUID            = yes
           Anonymous UID         = 4294967294
           Anonymous GID         = 4294967294
                            

    Change NFS share settings

    Change the settings of an NFS share.

    Format
    /stor/prov/fs/nfs {-id <value> | -name <value>} set [-async] [-descr <value>] [-defAccess {ro | rw | roroot | root | na}] [-advHostMgmtEnabled {yes | no}] [-roHosts <value>] [-rwHosts <value>] [-roRootHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-minSecurity {sys | krb5 | krb5i | krb5p}] [-allowSuid {yes | no}] [-anonUid <value>] [-anonGid <value>]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of an NFS share to change. View NFS share settings explains how to view the IDs of the NFS network shares on the system.
    -name
    Type the name of an NFS share to change.
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -descr
    Type a brief description of the share.
    -defAccess
    Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
    • ro – Hosts have read-only access to primary storage and snapshots associated with the share.
    • rw – Hosts have read/write access to primary storage and snapshots associated with the share.
    • roroot – Hosts have read-only root access to primary storage and snapshots associated with the share.
    • root – Hosts have read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
    • na – Hosts have no access to the share or its snapshots.
    -advHostMgmtEnabled
    Specify whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the /remote/host command.) Values are (case insensitive):
    • yes (default) — Host lists contain the IDs of registered hosts.
    • no — Host lists contain comma-separated strings, with each string defining a host name, IP address, subnet, netgroup, or DNS domain.
    -roHosts
    Type the IDs of hosts that have read-only access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -rwHosts
    Type the IDs of hosts that have read-write access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -roRootHosts
    Type the IDs of hosts that have read-only root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -rootHosts
    Type the IDs of hosts that have read-write root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -naHosts
    Type the ID of each host configuration for which you want to block access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
    -minSecurity
    Specifies the minimal security option that the client must provide for the NFS mount operation. Value is one of the following, from lower to higher security level. All higher security levels are supported, and can be enforced by the client at negotiations for secure NFS access.
    • sys - No server-side authentication (server relies on NFS client authentication). Also known as AUTH_SYS security.
    • krb5 - Kerberos v5 authentication.
    • krb5i - Kerberos v5 authentication and integrity.
    • krb5p - Kerberos v5 authentication and integrity; encryption is enabled.
    -allowSuid
    Specifies whether to allow users to set the setuid and setgid Unix permission bits. Values are (case insensitive):
    • yes (default) - Users can set the setuid and setgid Unix permission bits. This allows users to run the executable with privileges of the file owner.
    • no - Users cannot set the setuid and setgid Unix permission bits.
    -anonUid
    Specify the UID of the anonymous account.
    -anonGid
    Specify the GID of the anonymous account.
    Example

    The following command changes NFS share NFSShare_1 to block access to the share and its snapshots for host HOST_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFSShare_1 set -descr "My share" -naHosts "HOST_1"
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = NFSShare_1
    Operation completed successfully.
                            

    Delete NFS network shares

    Delete an NFS share.

    Format
    /stor/prov/fs/nfs {-id <value> | -name <value>} delete [-async]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the NFS share to delete. View NFS share settings explains how to view the IDs of the NFS network shares on the system.
    -name
    Type the name of the NFS share to delete.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command deletes NFS share NFSShare_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFSShare_1 delete
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    Manage SMB network shares

    Server Message Block (SMB) network shares use the SMB (formerly known as CIFS) protocol to provide an access point for configured Windows hosts, or IP subnets, to access file system storage. SMB network shares are associated with an SMB file system.

    Each SMB share is identified by an ID.

    The following table lists the attributes for SMB network shares:

    Table 17. SMB network share attributes
    Attribute
    Description
    ID
    ID of the share.
    Name
    Name of the share.
    Description
    Brief description of the share.
    Local path
    Name of the directory within the file system that the share provides access to.
    Export path
    Export path, used by hosts to connect to the share.
    Note:   The export path is a combination of the network name or the IP address of the associated NAS server and the name of the share.
    File system
    ID of the parent file system associated with the SMB share.
    Creation time
    Creation time of the share.
    Last modified time
    Last modified time of the share.
    Availability enabled
    Continuous availability state.
    Encryption enabled
    SMB encryption state.
    Umask
    Indicates the default Unix umask for new files created on the share. If not specified, the umask defaults to 022.
    ABE enabled
    Indicates whether an Access-Based Enumeration (ABE) filter is enabled. Valid values include:
    • yes — Filters the list of available files and folders on a share to include only those that the requesting user has access to.
    • no (default)
    DFS enabled
    Indicates whether Distributed File System (DFS) is enabled. Valid values include:
    • yes — Allows administrators to group shared folders located on different shares by transparently connecting them to one or more DFS namespaces.
    • no (default)
    BranchCache enabled
    Indicates whether BranchCache is enabled. Valid values include:
    • yes — Copies content from the main office or hosted cloud content servers and caches the content at branch office locations. This allows client computers at branch offices to access content locally rather than over the WAN.
    • no (default)
    Offline availability
    Indicates whether Offline availability is enabled. When enabled, users can use this feature on their computers to work with shared folders stored on a server, even when they are not connected to the network. Valid values include:
    • none — Prevents clients from storing documents and programs in offline cache. (default)
    • documents — All files that clients open from the share will be available offline.
    • programs — All programs and files that clients open from the share will be available offline. Programs and files will preferably open from offline cache, even when connected to the network.
    • manual — Only specified files will be available offline.
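The Umask attribute above defaults to 022 and behaves like the Unix umask: it clears permission bits from a newly created file's requested mode. As a quick illustration (the `apply_umask` helper is hypothetical), with a 022 umask a file requested with mode 666 ends up 644, and a directory requested with 777 ends up 755.

```python
# Hypothetical helper showing how a umask masks off permission bits.
def apply_umask(mode: int, umask: int = 0o022) -> int:
    """Clear the umask bits from the requested mode, as Unix does."""
    return mode & ~umask

print(oct(apply_umask(0o666)))  # 0o644
print(oct(apply_umask(0o777)))  # 0o755
```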

    Create CIFS network shares

    Create a CIFS (SMB) share to export a file system through the CIFS protocol.

    Prerequisite

    Configure a file system to which to associate the CIFS network shares. Create file systems explains how to create file systems on the system.

    Format
    /stor/prov/fs/cifs create [-async] -name <value> [-descr <value>] {-fs <value> | -fsName <value>} -path <value> [-enableContinuousAvailability {yes | no}] [-enableCIFSEncryption {yes | no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -name
    Type a name for the share.
    Note:  This value, along with the name of the NAS server, constitutes the export path by which hosts access the share.
    -descr
    Type a brief description of the share.
    -fs
    Type the ID of the parent file system associated with the CIFS share.
    -fsName
    Type the name of the parent file system associated with the CIFS share.
    -path
    Type the path to the directory within the file system that will be shared. This path must correspond to an existing directory within the file system that was created from the host side. The default path is the root of the file system. A local path must point to an existing directory within the file system.
    • The same path on a file system can be shared an unlimited number of times, but each share name must be unique. The initial share is created on the file system root directory.
    • Before you can create additional network shares within a CIFS (SMB) file system, you must create subdirectories within it from a Windows host that is connected to the file system. After a directory has been created from a mounted host, you can create a corresponding share on the system and set access permissions accordingly.
    -enableContinuousAvailability
    Specify whether continuous availability is enabled.
    -enableCIFSEncryption
    Specify whether CIFS encryption is enabled.
    -umask
    Type the default Unix umask for new files created on the share.
    -enableABE
    Specify if Access-based Enumeration (ABE) is enabled. Valid values include:
    • yes
    • no (default)
    -enableBranchCache
    Specify if BranchCache is enabled. Valid values include:
    • yes
    • no (default)
    -offlineAvailability
    Specify the type of offline availability. Valid values include:
    • none (default) — Prevents clients from storing documents and programs in offline cache.
    • documents — Allows all files that clients open to be available offline.
    • programs — Allows all programs and files that clients open to be available offline. Programs and files will open from offline cache, even when connected to the network.
    • manual — Allows only specified files to be available offline.
    Example

    The following command creates a CIFS share with these settings:

    • Name is CIFSshare.
    • Description is “My share.”
    • Associated to file system res_1.
    • Local path on the file system is directory "/cifsshare".
    • Continuous availability is enabled.
    • CIFS encryption is enabled.

    The share receives ID CIFS_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs create -name CIFSshare -descr "My share" -fs res_1 -path "/cifsshare" -enableContinuousAvailability yes -enableCIFSEncryption yes
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = CIFS_1
    Operation completed successfully.
                            

    View CIFS share settings

    View details of a CIFS (SMB) share. You can filter on the CIFS share ID or view the CIFS network shares associated with a file system ID.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/prov/fs/cifs [{-id <value> | -name <value> | -fs <value> | -fsName <value>}] show
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a CIFS share.
    -name
    Type the name of a CIFS share.
    -fs
    Type the ID of a CIFS file system to view the associated CIFS network shares.
    -fsName
    Type the name of a CIFS file system to view the associated CIFS network shares.
    Example

    The following command lists details for all CIFS network shares on the system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs show
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:     ID           = SMBShare_1
           Name         = fsmup
           Description  =
           File system  = res_1
           Local path   = /
           Export path  = \\sys-123.abc.xyz123.test.lab.emc.com\fsmup, \\10.0.0.0\fsmup
    
    2:     ID           = SMBShare_2
           Name         = fsmup
           Description  =
           File system  = res_5
           Local path   = /
           Export path  = \\sys-123.abc.xyz123.test.lab.emc.com\fsmup, \\10.0.0.0\fsmup
                            
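    Because the show output above uses a regular `field = value` layout, it is easy to post-process in a script. The following Python sketch is not part of uemcli; the sample text simply mirrors the listing above:

```python
import re

# Sample output in the same layout as the listing above.
output = """\
1:     ID           = SMBShare_1
       Name         = fsmup
       File system  = res_1
       Local path   = /

2:     ID           = SMBShare_2
       Name         = fsmup
       File system  = res_5
       Local path   = /
"""

def parse_show(text):
    """Split numbered records and collect their 'field = value' pairs."""
    records = []
    for block in re.split(r"\n\s*\n", text.strip()):
        record = {}
        for line in block.splitlines():
            # Drop the leading "N:" record marker if present.
            line = re.sub(r"^\s*\d+:\s*", "", line)
            key, _, value = line.partition("=")
            record[key.strip()] = value.strip()
        records.append(record)
    return records

shares = parse_show(output)
print([s["ID"] for s in shares])  # ['SMBShare_1', 'SMBShare_2']
```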

    Change CIFS share settings

    Change the settings of a CIFS (SMB) share.

    Format
    /stor/prov/fs/cifs {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-enableContinuousAvailability {yes | no}] [-enableCIFSEncryption {yes | no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a CIFS share to change.
    -name
    Type the name of a CIFS share to change.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -descr
    Specify the description for the CIFS share.
    -enableContinuousAvailability
    Specify whether continuous availability is enabled.
    -enableCIFSEncryption
    Specify whether CIFS encryption is enabled.
    -umask
    Type the default Unix umask for new files created on the share.
    -enableABE
    Specify if Access-Based Enumeration (ABE) is enabled. Valid values include:
    • yes
    • no
    -enableBranchCache
    Specify if BranchCache is enabled. Valid values include:
    • yes
    • no
    -offlineAvailability
    Specify the type of offline availability. Valid values include:
    • none — Prevents clients from storing documents and programs in offline cache.
    • documents — Allows all files that users open to be available offline.
    • programs — Allows all programs and files that users open to be available offline. Programs and files will open from offline cache, even when connected to the network.
    • manual — Allows only specified files to be available offline.
    Example

    The following command sets the description of CIFS share SMBShare_1 to My share.

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id SMBShare_1 set -descr "My share"
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = SMBShare_1
    Operation completed successfully.
                            

    Delete CIFS network shares

    Delete a CIFS (SMB) share.

    Format
    /stor/prov/fs/cifs {-id <value> | -name <value>} delete [-async]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a CIFS share to delete.
    -name
    Type the name of a CIFS share to delete.
    Action qualifier
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    Example

    The following command deletes CIFS share CIFSShare_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id CIFSShare_1 delete
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    Operation completed successfully.
                            

    Manage LUNs

    A LUN is a single unit of storage that represents a specific quantity of Fibre Channel (FC) or iSCSI storage allocated from a storage pool. Each LUN is associated with a name and a logical unit number identifier (LUN ID).

    The following table lists the attributes for LUNs:

    Table 18. LUN attributes
    Attribute
    Description
    ID
    ID of the LUN.
    Name
    Name of the LUN.
    Description
    Brief description of the LUN.
    Group
    Name of the consistency group of which the LUN is a member.
    Storage pool ID
    ID of the storage pool the LUN is using.
    Storage pool
    Name of the storage pool the LUN is using.
    Type

    Type of LUN. Value is one of the following (case insensitive):

    • Primary
    • Thin clone (tc when used with the create command)
    Base storage resource

    (Applies to thin clones only) ID of the base LUN for the thin clone.

    Source

    (Applies to thin clones only) ID of the source snapshot for the thin clone.

    Original parent

    (Applies to thin clones only) ID of the parent LUN for the thin clone.

    Health state
    Health state of the LUN. The health state code appears in parentheses. Value is one of the following:
    • OK (5)—The LUN is operating normally.
    • Degraded/Warning (10)—Working, but one or more of the following may have occurred:
      • One or more of its storage pools are degraded.
      • Resource is degraded.
      • Resource is running out of space and needs to be increased.
    • Minor failure (15)—One or both of the following may have occurred:
      • One or more of its storage pools have failed.
      • Resource is unavailable.
    • Major failure (20)—One or both of the following may have occurred:
      • One or more of its storage pools have failed.
      • Resource is unavailable.
    • Critical failure (25)—One or more of the following may have occurred:
      • One or more of its storage pools are unavailable.
      • Resource is unavailable.
      • Resource has run out of space and needs to be increased.
    • Non-recoverable error (30)—One or both of the following may have occurred:
      • One or more of its storage pools are unavailable.
      • Resource is unavailable.
    Health details
    Additional health information.
    Size
    Current size of the LUN.
    Maximum size
    Maximum size of the LUN.
    Thin provisioning enabled
    Identifies whether thin provisioning is enabled. Valid values are:
    • yes
    • no (default)

    All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be oversubscribed to support more storage capacity than they actually possess.

    Note:  The Unisphere online help provides more details on thin provisioning.
    Data Reduction enabled
    Identifies whether data reduction is enabled. Valid values are:
    • yes
    • no (default)
    Note:  Data reduction is available for thin LUNs in an All-Flash pool only.
    Data Reduction space saved
    Total space saved for the LUN (in gigabytes) by using data reduction.
    Note:  Data reduction is available for thin LUNs in an All-Flash pool only.
    Data Reduction percent
    Total storage percentage saved for the LUN by using data reduction.
    Note:  Data reduction is available for thin LUNs in an All-Flash pool only.
    Data Reduction ratio
    Ratio between data without data reduction and data after data reduction savings.
    Note:  Data reduction is available for thin LUNs in an All-Flash pool only.
    Advanced deduplication enabled
    Identifies whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
    • yes
    • no (default)
    Note:  Advanced deduplication is available only on:
    • Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
    • Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
    • All-Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
    Current allocation
    If thin provisioning is enabled, the quantity of primary storage currently allocated through thin provisioning.
    Total pool space preallocated
    Space reserved from the pool by the LUN for future needs to make writes more efficient. The pool may be able to reclaim some of this space if unused and pool space is running low.
    Total pool space used
    Total pool space used by the LUN.
    Non-base size used
    (Applies to standard LUNs only) Quantity of the storage used for the snapshots and thin clones associated with this LUN.
    Family size used
    (Applies to standard LUNs only) Quantity of the storage used for the whole LUN family.
    Snapshot count
    Number of snapshots created on the LUN.
    Family snapshot count
    (Applies to standard LUNs only) Number of snapshots created in the LUN family, including all derivative snapshots.
    Family thin clone count
    (Applies to standard LUNs only) Number of thin clones created in the LUN family, including all derivative thin clones.
    Protection schedule
    ID of a protection schedule applied to the LUN. View protection schedules explains how to view the IDs of the schedules on the system.
    Protection schedule paused
    Identifies whether an applied protection schedule is currently paused.
    WWN
    World Wide Name of the LUN.
    Replication destination
    Identifies whether the storage resource is a destination for a replication session (local or remote). Valid values are:
    • yes
    • no
    Creation time
    Time the resource was created.
    Last modified time
    Time the resource was last modified.
    SP owner
    Identifies the default owner of the LUN. Value is SP A or SP B.
    Trespassed
    Identifies whether the LUN is trespassed to the peer SP. Valid values are:
    • yes
    • no
    FAST VP policy
    FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
    • startHighThenAuto (default)—Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
    • auto—Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource's data based on the storage resource's performance statistics such that data is relocated among tiers according to I/O activity.
    • highest—Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
    • lowest—Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
    FAST VP distribution
    Percentage of the LUN assigned to each tier. The format is:

    <tier_name>:<value>%

    where:
    • <tier_name> is the name of the storage tier.
    • <value> is the percentage of storage in that tier.
    LUN access hosts
    List of hosts with access permissions to the LUN.
    Host LUN IDs
    Comma-separated list of HLUs (Host LUN identifiers), which the corresponding hosts use to access the LUN.
    Snapshots access hosts
    List of hosts with access to snapshots of the LUN.
    IO limit
    Name of the host I/O limit policy applied.
    Effective maximum IOPS
    The effective maximum IO per second for the LUN. For LUNs with a density-based IO limit policy, this value is equal to the product of the Maximum IOPS and the Size of the attached LUN.
    Effective maximum KBPS
    The effective maximum KBs per second for the LUN. For LUNs with a density-based IO limit policy, this value is equal to the product of the Maximum KBPS and the Size of the attached LUN.
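    The last two attributes describe density-based host I/O limit policies, where the effective limits scale with the size of the attached LUN. A minimal sketch of that arithmetic (the per-GB policy values and LUN size here are illustrative, not taken from the source):

```python
# Density-based policy: limits are expressed per GB of LUN size.
max_iops_per_gb = 500      # illustrative policy setting
max_kbps_per_gb = 2048     # illustrative policy setting
lun_size_gb = 20           # size of the attached LUN

# Effective limit = policy value multiplied by the LUN size.
effective_max_iops = max_iops_per_gb * lun_size_gb
effective_max_kbps = max_kbps_per_gb * lun_size_gb
print(effective_max_iops, effective_max_kbps)  # 10000 40960
```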

    Create LUNs

    Create a LUN to which host initiators connect to access storage.

    Prerequisites

    Configure at least one storage pool for the LUN to use and allocate at least one drive to the pool. Configure custom pools explains how to create a custom storage pool on the system.

    Format
    /stor/prov/luns/lun create [-async] -name <value> [-descr <value>] [-type {primary | tc {-source <value> | -sourceName <value>}}] [{-group <value> | -groupName <value>}] [{-pool <value> | -poolName <value>}] [-size <value>] [-thin {yes | no}] [-sched <value> [-schedPaused {yes | no}]] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-lunHosts <value> [-hlus <value>]] [-snapHosts <value>] [-replDest {yes | no}] [-ioLimit <value>] [-dataReduction {yes [-advancedDedup {yes | no}] | no}]
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -name
    Type the name of the LUN.
    -descr
    Type a brief description of the LUN.
    -type
    Specify the type of LUN. Valid values are (case insensitive):
    • primary (default)
    • tc
    -source
    (Applies to thin clones only) Specify the ID of the source object to use for thin clone creation.
    -sourceName
    (Applies to thin clones only) Specify the name of the source object to use for thin clone creation.
    -group
    (Not applicable when creating a thin clone) Type the ID of a consistency group to which to associate the new LUN. View consistency groups explains how to view information on consistency groups.
    Note:  If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
    -groupName
    (Not applicable when creating a thin clone) Type the name of a consistency group to which to associate the new LUN.
    Note:  If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
    -pool
    (Not applicable when creating a thin clone) Type the ID of the storage pool that the LUN will use.
    Note:  Value is case-insensitive.
    View pools explains how to view the names of the storage pools on the system.
    -poolName
    (Not applicable when creating a thin clone) Type the name of the storage pool that the LUN will use.
    -size
    (Not applicable when creating a thin clone) Type the quantity of storage to allocate for the LUN.
    -thin
    (Not applicable when creating a thin clone) Enable thin provisioning on the LUN. Valid values are:
    • yes
    • no (default)
    -sched
    Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
    -schedPaused
    Pause the schedule specified for the -sched qualifier. Valid values are:
    • yes
    • no (default)
    -spOwner
    (Not applicable when creating a thin clone) Specify the default SP to which the LUN will belong. The storage system determines the default value. Valid values are:
    • spa
    • spb
    -fastvpPolicy
    (Not applicable when creating a thin clone) Specify the FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
    • startHighThenAuto (default)—Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
    • auto—Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource's data based on the storage resource's performance statistics such that data is relocated among tiers according to I/O activity.
    • highest—Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
    • lowest—Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
    -lunHosts
    Specify a comma-separated list of hosts with access to the LUN.
    -hlus
    Specifies the comma-separated list of Host LUN identifiers to be used by the corresponding hosts which were specified in the -lunHosts option. The number of items in the two lists must match. However, an empty string is a valid value for any element of the Host LUN identifiers list, as long as commas separate the list elements. Such an empty element signifies that the system should automatically assign the Host LUN identifier value by which the corresponding host will access the LUN.

    If not specified, the system will automatically assign the Host LUN identifier value for every host specified in the -lunHosts argument list.

    -snapHosts
    Specify a comma-separated list of hosts with access to snapshots of the LUN.
    -replDest
    (Not applicable when creating a thin clone) Specifies whether the resource is a replication destination. Valid values are:
    • yes
    • no (default)
    -ioLimit
    Specify the name of the host I/O limit policy to be applied.
    -dataReduction
    (Not applicable when creating a thin clone) Specify whether data reduction is enabled for this LUN. Valid values are:
    • yes
    • no (default)
    Note:  Data reduction is available for thin LUNs in an All-Flash pool only.
    -advancedDedup
    Specify whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
    • yes
    • no (default)
    Note:  Advanced deduplication is available only on:
    • Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
    • Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
    • All-Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
    Example 1

    The following command creates a LUN with these settings:

    • Name is MyLUN.
    • Description is “My LUN.”
    • Associated with LUN consistency group group_1.
    • Uses the pool_1 storage pool.
    • Primary storage size is 100 MB.

    The LUN receives the ID lun_1:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyLUN" -descr "My LUN" -type primary -group group_1 -pool pool_1 -size 100M
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = lun_1
    Operation completed successfully.
                            
    Example 2

    The following command creates a thin clone called MyTC from SNAP_1. The thin clone receives the ID lun_3.

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyTC" -descr "My TC" -type tc -source SNAP_1
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    ID = lun_3
    Operation completed successfully.
                            
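    As described for the -hlus qualifier above, the -lunHosts and -hlus lists are matched positionally, and an empty element in the HLU list tells the system to auto-assign that host's identifier. A Python sketch of that pairing logic (host names and values are illustrative):

```python
def pair_hlus(lun_hosts: str, hlus: str):
    """Pair each host with its HLU; None means the system auto-assigns it."""
    hosts = lun_hosts.split(",")
    ids = hlus.split(",")
    if len(hosts) != len(ids):
        raise ValueError("-lunHosts and -hlus lists must be the same length")
    return {h: (int(i) if i else None) for h, i in zip(hosts, ids)}

# Host_2 gets HLU 0; Host_3's identifier is auto-assigned (empty element).
print(pair_hlus("Host_2,Host_3", "0,"))  # {'Host_2': 0, 'Host_3': None}
```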

    View LUNs

    Display the list of existing LUNs.

    Note:   The show action command explains how to change the output format.
    Format
    /stor/prov/luns/lun [{-id <value> | -name <value> | -group <value> | -groupName <value> | -standalone}] [-type {primary | tc [{-baseRes <value> | -baseResName <value> | -originalParent <value> | -originalParentName <value> | -source <value> | -sourceName <value>}]}] show
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of a LUN.
    -name
    Type the name of a LUN.
    -group
    Type the ID of a consistency group. The list of LUNs in the specified consistency group is displayed.
    -groupName
    Type the name of a consistency group. The list of LUNs in the specified consistency group is displayed.
    -standalone
    Displays only LUNs that are not part of a consistency group.
    -type

    Identifies the type of resources to display. Valid values are (case insensitive):

    • primary
    • tc
    -baseRes
    (Applies to thin clones only) Type the ID of a base LUN by which to filter thin clones.
    -baseResName
    (Applies to thin clones only) Type the name of a base LUN by which to filter thin clones.
    -originalParent
    (Applies to thin clones only) Type the ID of a parent LUN by which to filter thin clones.
    -originalParentName
    (Applies to thin clones only) Type the name of a parent LUN by which to filter thin clones.
    -source
    (Applies to thin clones only) Type the ID of a source snapshot by which to filter thin clones.
    -sourceName
    (Applies to thin clones only) Type the name of a source snapshot by which to filter thin clones.
    Example 1

    The following command displays information about all LUNs and thin clones on the system:

    uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun show -detail
                              Storage system address: 10.0.0.1
    Storage system port: 443
    HTTPS connection
    
    1:    ID                             = sv_1
          Name                           = AF LUN 1
          Description                    =
          Group                          =
          Storage pool ID                = pool_1
          Storage pool                   = Pool 1
          Type                           = Primary
          Base storage resource          = sv_1
          Source                         =
          Original parent                =
          Health state                   = OK (5)
          Health details                 = "The LUN is operating normally. No action is required."
          Size                           = 21474836480 (20.0G)
          Maximum size                   = 281474976710656 (256.0T)
          Thin provisioning enabled      = yes
          Compression enabled            = yes
          Compression space saved        = 5637144576 (5.2G)
          Compression percent            = 44%
          Compression ratio              = 1.8:1
          Data Reduction enabled         = yes
          Data Reduction space saved     = 5637144576 (5.2G)
          Data Reduction percent         = 44%
          Data Reduction ratio           = 1.8:1
          Advanced deduplication enabled = no
          Current allocation             = 4606345216 (4.2G)
          Protection size used           = 0
          Non-base size used             = 0
          Family size used               = 12079595520 (11.2G)
          Snapshot count                 = 2
          Family snapshot count          = 2
          Family thin clone count        = 0
          Protection schedule            = snapSch_1
          Protection schedule paused     = no
          WWN                            = 60:06:01:60:10:00:43:00:B7:15:A5:5B:B1:7C:01:2B
          Replication destination        = no
          Creation time                  = 2018-09-21 16:00:55
          Last modified time             = 2018-09-21 16:01:41
          SP owner                       = SPB
          Trespassed                     = no
          LUN access hosts               = Host_2
          Host LUN IDs                   = 0
          Snapshots access hosts         =
          IO limit                       =
          Effective maximum IOPS         = N/A
          Effective maximum KBPS         = N/A
                            
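    The Data Reduction percent and Data Reduction ratio fields in the listing above are two views of the same savings. The sketch below shows one consistent reading of how they relate, using the values shown (44% and 1.8:1); the exact rounding uemcli applies is an assumption:

```python
# From the example output: "Data Reduction ratio = 1.8:1",
# meaning data before reduction is 1.8x the data stored after reduction.
ratio = 1.8

# Fraction saved = 1 - (after / before) = 1 - 1/ratio.
percent_saved = (1 - 1 / ratio) * 100
print(round(percent_saved))  # 44
```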

    Change LUNs

    Change the settings for a LUN.

    Format
    /stor/prov/luns/lun {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-size <value>] [{-group <value> | -groupName <value> | -standalone}] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-lunHosts <value> [-hlus <value>]] [-snapHosts <value>] [-replDest {yes | no}] [-ioLimit <value> | -noIoLimit] [-dataReduction {yes [-advancedDedup {yes | no}] | no}]
    Object qualifiers
    Qualifier
    Description
    -id
    Type the ID of the LUN to change.
    -name
    Type the name of the LUN to change.
    Action qualifiers
    Qualifier
    Description
    -async
    Run the operation in asynchronous mode.
    -name
    Type the name of the LUN.
    -descr
    Type a brief description of the LUN.
    -group
    (Not applicable to thin clones) Type the ID of a consistency group with which to associate the LUN. View consistency groups explains how to view information on consistency groups.
    Note:  If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
    -groupName
    (Not applicable to thin clones) Type the name of a consistency group with which to associate the LUN.
    Note:  If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
    -size
    Type the quantity of storage to allocate for the LUN.
    -standalone
    (Not applicable to thin clones) Remove the LUN from the consistency group.
    -sched