Configuring Arrays for Use with VPLEX


Discovering arrays

Note:  In releases before GeoSynchrony Release 5.1 Patch 2, when allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time.

After each set of no more than 10 LUNs has been allocated (before Release 5.1 Patch 2), check the VPLEX to confirm that all 10 have been discovered before allocating the next set. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not apply when the array is first introduced to the VPLEX, before it is an active target of I/O.
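
One way to confirm that a batch has been discovered is from the VPlexcli; a minimal sketch, assuming a hypothetical cluster ID and array name:

cd /clusters/cluster-1/storage-elements/storage-arrays

array re-discover EMC-SYMMETRIX-195700123

ll /clusters/cluster-1/storage-elements/storage-arrays/EMC-SYMMETRIX-195700123/logical-units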

Note:  VPLEX supports only block-based storage devices that use 512-byte sectors for allocation and addressing, so ensure that any storage array connecting to VPLEX supports or emulates 512-byte sectors. A storage device that does not use 512-byte sectors can be discovered by VPLEX, but it cannot be claimed for use within VPLEX and cannot be used to create a meta-volume. When you try to use discovered storage volumes with unsupported block sizes within VPLEX (either by claiming them or by creating a meta-volume with the appropriate VPLEX CLI commands), the command fails with this error: the disk has an unsupported disk block size and thus can't be moved to a non-default spare pool.
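
Before exporting volumes, you can confirm the sector size from any Linux host that can see the device; a minimal sketch, assuming a hypothetical device name:

blockdev --getss /dev/sdbg     # logical sector size; must report 512 for VPLEX

blockdev --getpbsz /dev/sdbg   # physical sector size, for reference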

For metadata volumes

Volumes that will be used as metadata volumes must meet the requirements specified in the configuration guide. Those volumes must be clean (have zeros written) before they can be used.

An example of how to clean all data from a given disk:

  1. [   ]    Expose the disk that will be used for metadata to a Linux host.

  2. [   ]    Write zeros to the disk using the following command:

WARNING:    This command will erase all data on the disk.

dd if=/dev/zero of=<device-name> conv=notrunc

Example:

dd if=/dev/zero of=/dev/sdbg conv=notrunc
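
To spot-check that the wipe succeeded, dump the beginning of the disk; hexdump collapses repeated lines, so a zeroed region prints a single row of zeros, an asterisk, and the end offset. A minimal sketch, using the same hypothetical device:

dd if=/dev/sdbg bs=1M count=16 2>/dev/null | hexdump -C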

Initiator settings on back-end arrays

The EMC Simple Support Matrix on the EMC® Support website lists the storage arrays that have been qualified for use with VPLEX™.

The following table identifies the initiator settings for these arrays when configuring for use with VPLEX.

| Storage array family | Model | Vendor | Product ID | Initiator settings |
|---|---|---|---|---|
| EMC Symmetrix® |  | EMC | SYMMETRIX | See “EMC Symmetrix” on page 7 |
| EMC CLARiiON® |  | EMC | CLARIION | See “EMC CLARiiON” on page 9 |
| HDS USP/HP XP |  | Hitachi | OPEN | Default (Standard) |
| HDS VSP/HP P9500 |  | Hitachi | OPEN | Default (Standard) |
| Hitachi 9900 series (Lightning) | HDS 9910, HDS 9960, HDS 9970, HDS 9980 | Hitachi | OPEN | Default (Standard) |
| Hitachi USP series (TagmaStore) | HDS TagmaStore NSC55, USP100, USP600, USP1100 | Hitachi | OPEN | Default (Standard) |
| Hitachi USP VM series | HDS USP VM | Hitachi | OPEN | Default (Standard) |
| Hitachi AMS 2xxx series | HDS AMS 2100, HDS AMS 2300, HDS AMS 2500 | Hitachi | DF600F | Windows |
| Sun/HDS 99xx series |  | Hitachi | OPEN | Default (Standard) |
| IBM DS4700 | IBM DS4700 | IBM | OPEN-V | Linux |
| IBM DS8000 series | IBM DS8100, IBM DS8300 | IBM | 2107900 | Windows 2000/2003 |
| IBM SVC | SVC | IBM | 2145 | Generic |
| IBM XIV | XIV | IBM | 2810XIV | Default (Standard) |
| 3PAR | 3PAR | 3PARdata | VV | Generic or Generic ALUA (if applicable) |
| Fujitsu DX8x00, ETERNUS 8000 M1200/M2200 | ETERNUS 8000 | Fujitsu | E8000 | Linux |
|  | ETERNUS DX8000 |  | ETERNUS_DX800 |  |
| HP EVA 4/6/8000, 4/6/8100 and 4/6/8400 | HP EVA 4000 AA | HP or COMPAQ | HSV101 | Linux |
|  | HP EVA 4100 AA |  | HSV200 |  |
|  | HP EVA 4400 AA |  | HSV300 |  |
|  | HP EVA 6000 AA |  | HSV200 |  |
|  | HP EVA 6100 AA |  | HSV200 |  |
|  | HP EVA 6400 AA |  | HSV400 |  |
|  | HP EVA 8000 AA |  | HSV210 |  |
|  | HP EVA 8100 AA |  | HSV210 |  |
|  | HP EVA 8400 AA |  | HSV450 |  |
| HP StorageWorks XP 48/128/512/1000/10000/12000/20000/24000 | HP XP48, XP512, XP128, XP1024, XP10000, XP12000, XP20000, XP24000 | HP or COMPAQ | OPEN | Default (Standard) |
| NetApp FAS/V 3xxx/6xxx series |  | NETAPP | LUN | Linux |

The following sections describe the steps to configure the arrays for use with VPLEX.

EMC Symmetrix

For Symmetrix-to-VPLEX connections, configure the Symmetrix Fibre Channel directors (FAs) as shown in Table 1.

Table 1       Required Symmetrix FA bit settings for connection to VPLEX

| Set * | Do not set | Optional |
|---|---|---|
| SPC-2 Compliance (SPC2) | Disable Queue Reset on Unit Attention (D) | Linkspeed |
| SCSI-3 Compliance (SC3) | AS/400 Ports Only (AS4) | Enable Auto-Negotiation (EAN) |
| Enable Point-to-Point (PP) | Avoid Reset Broadcast (ARB) | VCM/ACLX ** |
| Unique Worldwide Name (UWN) | Environment Reports to Host (E) |  |
| Common Serial Number (C) | Soft Reset (S) |  |
| For Release 5.2 and later: OS-2007 (OS compliance) | Open VMS (OVMS) |  |
|  | Return Busy (B) |  |
|  | Enable Sunapee (SCL) |  |
|  | Sequent Bit (SEQ) |  |
|  | Non Participant (N) |  |
|  | For releases before Release 5.2: OS-2007 (OS compliance) |  |


*  For the Symmetrix 8000 series, only the PP, UWN, and C bits must be set.

**  Must be set if VPLEX is sharing Symmetrix directors with hosts that require conflicting bit settings. For any other configuration, the VCM/ACLX bit can be either set or not set.

Note:  The EMC Host Connectivity Guides on the EMC Support website provide more information on Symmetrix connectivity to VPLEX.

Procedure to enable OS2007 (required for operation on Release 5.2 and later)

From VPLEX GeoSynchrony Release 5.2 onward, the OS2007 bit must be enabled on the Symmetrix/VMAX FAs that are connected to VPLEX back-end ports. Enabling this bit on the Symmetrix allows VPLEX to detect (in the presence of host I/O) configuration changes in the array storage view and react by automatically re-discovering the back-end storage view and detecting LUN re-mapping issues.

  1. [   ]    Ensure that the VPLEX connected to the Symmetrix/VMAX is on GeoSynchrony Release 5.2 or higher.

  2. [   ]    As recommended for VPLEX, ensure that SPC-2 is set on the ports/storage group that has the VPLEX back-end initiators attached/referenced.

  3. [   ]    Follow Symmetrix/VMAX documentation to set the OS2007 bit on the FA. If the FA is also connected (masked) with initiator ports other than VPLEX, ensure that those initiators do not get impacted by this configuration change.

  4. [   ]    Set the OS2007 flag on a Symmetrix target port using symconfigure.

a.   Preview the command for enabling the OS2007 bit on Symmetrix DIR FA port 10e:0:

symconfigure -sid <SymmID> -cmd "set port 10e:0 SCSI_Support1=ENABLE;" preview

b.   Execute the command for enabling the OS2007 bit on Symmetrix DIR FA port 10e:0:

symconfigure -sid <SymmID> -cmd "set port 10e:0 SCSI_Support1=ENABLE;" commit

  5. [   ]    If the OS2007 flag cannot be set on the Symmetrix target port (for example, if the port is shared between VPLEX and non-VPLEX initiators), the following symaccess command can be used to set it per initiator:

symaccess -sid SymmID -wwn wwn | -iscsi iscsi

 set hba_flags [on flag,flag,flag... [-enable |-disable] |

   off [flag,flag,flag...]]

   list logins [-dirport Dir:Port] [-v]

...

   flag             Specify the overridden HBA port flags or

                    initiator group port flags from the

                    following values in []:

                    Supported HBA port flags:

                    - Common_Serial_Number     [C]

                    - Disable_Q_Reset_on_UA    [D]

                    - Environ_Set              [E]

                    - Avoid_Reset_Broadcast    [ARB]

                    - AS400                    [AS4]

                    - OpenVMS                  [OVMS]

                    - SCSI_3                   [SC3]

                    - SPC2_Protocol_Version    [SPC2]

                    - SCSI_Support1            [OS2007]

                     Supported initiator group port flags:

                    - Volume_Set_Addressing    [V]
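
Following the usage above, a minimal sketch of enabling the flag for a single VPLEX back-end initiator and then verifying it; the SID, WWN, and director port are hypothetical:

symaccess -sid 1234 -wwn 5000144260037500 set hba_flags on OS2007 -enable

symaccess -sid 1234 list logins -dirport 10e:0 -v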

 6. [   ]    If the OS2007 bit needs to be enabled on multiple FA ports, enable them sequentially; a scripted sketch follows this procedure.

 7. [   ]    Ensure that the OS2007 bit is enabled on all FA ports connected to VPLEX on the Symmetrix/VMAX array.

This procedure is non-disruptive to host I/O to VPLEX and requires no specific steps on VPLEX.
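
If several FA ports need the flag (step 6), the symconfigure command can simply be repeated per port; a minimal bash sketch, with a hypothetical SID and port list:

for port in 7e:0 8e:0 9e:0 10e:0; do
    symconfigure -sid 1234 -cmd "set port ${port} SCSI_Support1=ENABLE;" commit -noprompt
done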

Notes on thin provisioning support in GeoSynchrony 4.x

·    VPLEX tolerates thinly provisioned devices. However, VPLEX copy and mobility operations (such as migrations and mirrors) do not preserve thinness. The target device is converted into a fully allocated device. After a copy/mobility operation is complete, use zero block reclaim or a similar array-specific utility to make the target device thin again.

·    System volumes such as metadata and logging volumes are supported on thin devices. However, all extents should be pre-allocated, to prevent out-of-space conditions.

·    Oversubscribed thin devices are not supported as system devices.

Note:  Refer to Symmetrix best practices documentation for more information on thin provisioning.

EMC CLARiiON

Set the following for CLARiiON-to-VPLEX attachment:

Note:  On CLARiiON VNX, you can do this when registering VPLEX initiators on the Host > Connectivity Status screen. Refer to Registering VPLEX initiators with CLARiiON VNX arrays on page 92 for more information on registering VPLEX initiators.

·    Initiator type = CLARiiON Open

·    Failover Mode =  4 for ALUA mode, 1 for non-ALUA

·    (Active-passive array only) Auto-switch = True

Note:  The recommended number of LUNs to add at one time is limited to 40.

To add LUNs to a VNX storage group, follow these steps:

  1. [   ]    Click Storage in the Unisphere GUI.

  2. [   ]    Click LUNs.

  3. [   ]    Select the LUNs to add to a storage group.

  4. [   ]    Click Add to Storage Group.
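
The same operation can be scripted with naviseccli; a minimal sketch, with a hypothetical SP address, storage-group name, and LUN numbers (host LUN 5, array LUN 17):

naviseccli -h 192.168.47.27 storagegroup -addhlu -gname VPLEX_SG -hlu 5 -alu 17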

Note:  The EMC Host Connectivity Guides on EMC Support Online provide more information on CLARiiON connectivity to VPLEX.

Additional requirements for CLARiiON VNX:

·    OE for Block V31: Only block-based CLARiiON arrays with Flare R31 are supported. Filesystem-based mode is not supported.

·    You must activate any SAN Copy LUNs configured on CLARiiON VNX before exporting them to VPLEX.

·    When claiming CLARiiON LUNs through VPLEX, use the naviseccli command naviseccli getlun -uid -name to create a device mapping file (a worked sketch follows this list).

Note:  The naviseccli command has to be run against the CLARiiON (note the -h option in the example).

Example: naviseccli -h 192.168.47.27 getlun -uid -name > Clar0400.txt

The file name determines the array name; in this example, storage volumes from the CLARiiON would get the Clar0400_ prefix.


·    Array interoperability restrictions for VNX2 (Rockies):

a.   VPLEX supports both the non-ALUA and ALUA failover modes.

b.   Failover mode changes from non-ALUA to ALUA mode are NOT supported.

c.   Failover mode changes from ALUA to non-ALUA are supported.

d.   When a VNX2 (Rockies) is connected to VPLEX for the first time, select the failover mode BEFORE provisioning LUNs, and DO NOT change it afterward.
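
Tying the settings above together, a minimal naviseccli sketch of registering a VPLEX back-end initiator with the required failover mode and then generating the device-mapping file; the SP address, storage-group name, initiator WWN, and file name are hypothetical:

naviseccli -h 192.168.47.27 storagegroup -setpath -gname VPLEX_SG -hbauid <initiator_WWN> -sp a -spport 0 -failovermode 4 -arraycommpath 1

naviseccli -h 192.168.47.27 getlun -uid -name > Clar0400.txt

On the VPLEX side, the mapping file is consumed by the VPlexcli claiming wizard; see the VPLEX CLI guide for the exact syntax on your release.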

Notes on thin provisioning support in GeoSynchrony 4.x

·    VPLEX tolerates thinly provisioned devices. However, VPLEX copy and mobility operations (such as migrations and mirrors) do not preserve thinness. The target device is converted into a fully allocated device. After a copy/mobility operation is complete, use zero block reclaim or a similar array-specific utility to make the target device thin again.

·    System volumes such as metadata and logging volumes are supported on thin devices. However, all extents should be pre-allocated, to prevent out-of-space conditions.

·    Oversubscribed thin devices are not supported as system devices.

Note:  Refer to CLARiiON best practices documentation for more information on thin provisioning.

HP 3PAR V/T/S/F/Pxxx storage arrays

Starting in GeoSynchrony Release 5.2, HP 3PAR storage arrays can be presented as either Active/Active or ALUA, depending on the host persona setting.

To provision HP 3PAR LUNs to VPLEX:

                   

  1. [   ]    Zone the HP 3PAR storage array to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

Note on zoning: To prevent data unavailability, ensure that each host in a storage view has paths to at least two directors in a cluster, and that multi-pathing is configured in such a way that the paths are distributed evenly between directors A and B in each engine.

Note:   When exporting storage LUs from 3PAR to VPLEX, never use logical unit number 0xfe (254 in the 3PAR GUI as listed in decimal notation). Leave this logical unit number unused so that the SES LU can keep that logical unit number uncontested.

Note:   Avoid using the Auto checkbox for selecting the logical unit number (LUN) in the 3PAR export dialog. This checkbox is checked by default, and it chooses the lowest available logical unit numbers, which can include 0xfe/254 for one of the storage LUs. Instead, manually choose logical unit numbers that do not include 0xfe.

  2. [   ]    To log in to the 3PAR InForm Management Console, click 3PAR Management Console, and enter the IP address, username, and password.

  3. [   ]    To create a common provisioning group (CPG), right-click CPG and select Create common provisioning group.

Figure 1       

  4. [   ]    Follow the CPG wizard to create the common provisioning group with a CPG name, device, and RAID type.

Figure 2       

  5. [   ]    To create virtual volumes, in the left panel, right-click Virtual Volumes and select Create virtual volumes.

Figure 3       

  6. [   ]    In the Create virtual volumes wizard, fill in the information for LUN creation such as Name, Size, Provisioning, CPG, and Count.

Figure 4       

  7. [   ]    To create a host, in the left panel, right-click Hosts and select Create host.

Figure 5       

  8. [   ]    To verify that the virtual volumes are exported, in the Storage Systems screen, select Export.

Figure 6       

  9. [   ]    Click on VLUNs to map volumes to servers.

10. [   ]    In the Export Virtual Volume wizard, select the volumes and hosts and click Next

Note:  The Auto box is not selected; this is the manual way to provision LUNs.

Figure 7       

11. [   ]    Click Exported under Virtual Volumes to verify that the LUNs are mapped to the host.

Figure 8       

12. [   ]    To verify that the host is connected to the devices, right-click Ports under Storage Systems in the left panel.

Figure 9       

13. [   ]    If you need to change the persona of the host, use the Create hosts wizard:

·    1 – Generic:  the 3PAR array presents as an Active/Active array on VPLEX.

·    2 – Generic-ALUA:  the 3PAR array presents as an implicit ALUA array on VPLEX. All back-end paths to each LUN will be Active (AAO) paths only.

Figure 10    

14. [   ]    In the Create Hosts wizard, select an initiator and a port for the host.

Figure 11    

15. [   ]    In the left panel, right-click Virtual Volume set and select Create virtual volume set.

Figure 12    

16. [   ]    List the virtual volumes exported to the host. Select the Virtual Volumes to provision.

Figure 13    

17. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover array_name

ll /clusters/cluster-Cluster_ID/storage-elements/storage-arrays/3PAR-Array-name/logical-units
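
The GUI steps above can also be scripted with the 3PAR InForm CLI; a minimal sketch, with hypothetical CPG, volume, host, and WWN names, which assigns an explicit LUN number below 254 as recommended earlier:

createcpg -t r5 VPLEX_CPG

createvv -tpvv VPLEX_CPG vplex_vv01 100g

createhost -persona 1 vplex_host01 5000144260037500 5000144260037510

createvlun vplex_vv01 1 vplex_host01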

HP XP 24000/20000/12000/10000/1000/512/128/48

Note:  HP XP 24000/20000/12000/10000/1000/512/128/48 arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 -- Logical unit not supported.

To provision HDS/HPXP LUNs for VPLEX:

                   

  1. [   ]    Log in to the HDS/XP Remote Web Console.

  2. [   ]    From the menu bar, select Go > LUN Manager  >  LU Path and Security.

  3. [   ]    In the LU Path list, select a port and ensure that LUN Security is enabled:

(LUN Security: Enable) Target Standard

 

  4. [   ]    Perform LUN masking on the port to which LUNs will be exposed.

 

  5. [   ]    If you need to change the port’s host group to Standard:

        

a.   Click the pen icon on the toolbar, and select Change Host Group from the drop-down menu.

b.   Change the host group’s Host Mode to 00(Standard) and click Apply.

  6. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

  7. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/
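
With real values filled in, steps 6 and 7 might look like this; the cluster ID and array name are hypothetical:

cd /clusters/cluster-1/storage-elements/storage-arrays

array re-discover HITACHI-OPEN-10051

ll /clusters/cluster-1/storage-elements/storage-volumes/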

HP P6300/P6500

HP P6300/P6500 supports implicit-explicit ALUA mode.

To provision HP P6300/P6500 LUNs to VPLEX:

                   

  1. [   ]    Zone the HP P6300/P6500 storage to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

 

  2. [   ]    Log in to the HP P6000 Command View GUI by logging in to the array management server, opening Internet Explorer to https://localhost:2374/SPoG/, and entering credentials.

  3. [   ]    If necessary, create a Disk Group:

        

a.   On the GUI, navigate to Storage Network > storage_system_name > Disk Groups.

b.   Select Create Disk Group.

c.   Enter the name and number of disks.

d.   Modify advanced settings as desired.

e.   Click Create Disk Group.

Figure 14    

  4. [   ]    Create a Host:

        

a.   On the GUI, navigate to Storage Network > storage_system_name > Hosts.

b.   Select Add Host.

c.   Enter a name.

d.   Select Fibre channel for the type.

e.   Select the VPLEX back-end port WWNs in the pull-down menu (or enter them manually if they do not appear).

f.   Select Linux for the operating system.

g.   Modify advanced settings as desired.

h.   Click Add Host.

 

Figure 15    

  5. [   ]    Create Virtual Disk(s):

        

a.   In the GUI, navigate to Storage Network > storage_system_name > Virtual Disks.

b.   Click Create Vdisks.

c.   Provide a quantity, a name, and a size; select a redundancy level and a disk group.

d.   Modify advanced settings as desired.

e.   Click Create Vdisks.

Figure 16    

  6. [   ]    Verify the Vdisk properties of the newly created virtual disks.

Figure 17    

  7. [   ]    Note that, at this point, the newly created virtual disks are not yet presented to a host.

Figure 18    

  8. [   ]    Present the virtual disks to the host:

        

a.   In the GUI, navigate to Storage Network > storage_system_name > Hosts > host_name.

b.   Switch to the Presentation tab.

c.   Select Present.

Figure 19    

 

  9. [   ]    Select the virtual disk(s) to present and select Confirm Selections.

10. [   ]    Select LUN IDs if desired and select Present Vdisk.

Figure 20    

11. [   ]    Verify that the virtual disks are presented to the host under the Presentation tab.

Figure 21    

12. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

13. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/ 
