HPE 3PAR StoreServ
Persistent Ports
Contents
Terminology ................................................................................ 2
Executive summary .......................................................................... 2
Features and benefits ...................................................................... 2
What’s new with Persistent Ports on HPE 3PAR? .............................................. 3
What does it do? ........................................................................... 3
Requirements ............................................................................... 4
Configuration considerations ............................................................... 4
How does it work? .......................................................................... 7
Setting the FC switches for NPIV ........................................................... 9
Best practice, zoning, and multi-pathing considerations for FC and FCoE ................... 10
Firmware upgrades with Persistent Ports ................................................... 11
Using Matched Set VLUN templates with Persistent Ports .................................... 11
HPE Virtual Connect considerations with Persistent Ports .................................. 12
Technical white paper
Terminology
• N_Port ID Virtualization (NPIV): A Fibre Channel (FC) facility that allows multiple N_Port IDs to share a single physical N_Port on an
FC switch.
• Host facing port: A port on a host bus adapter (HBA) in an HPE 3PAR StoreServ array that has been configured to provide host access to
the array.
• Partner nodes: A pair of nodes on an HPE 3PAR StoreServ array that are connected to a common set of disk enclosures (chassis).
• Host Port Native Identity: The primary port identity of an HPE 3PAR host facing port (for example, 0:1:1).
• Host Port Guest Identity: A host port’s “Guest” identity is associated with the corresponding host facing port’s “Native” identity on its partner
node (for example, for Native port 0:1:1, the Guest identity would be 1:1:1). The corresponding Guest identity on the partner node uses the
same slot and port as the partner node’s Native identity. The host port “Guest” identity is used to log into the SAN fabric by a node’s partner if
the node owning the Native identity becomes unavailable.
• FCoE: FCoE is a computer network technology that encapsulates FC frames onto lossless 10 or 100 Gb Ethernet networks.
• iSCSI: iSCSI is an IP-based protocol for linking data storage devices over an IP network and transferring data by carrying SCSI commands over
IP networks.
Executive summary
Mission-critical tier-1 storage environments require extremely high availability (HA). Tier-1 customers running hundreds (or thousands) of servers
in an enterprise environment feel that a dependency on host multipathing failover software during firmware upgrades, node failures, or in
response to a “loss_sync” event (a physical layer problem between the storage array and the switch) introduces the risk of a service disruption
and hence should be avoided. HPE 3PAR Persistent Ports technology allows for a non-disruptive environment (from the host multi-pathing point
of view) where host-based multi-pathing software will not be involved in maintaining server connectivity to the storage during firmware
upgrades, in the event of a node failure, in response to an array port being taken offline administratively, or as the result of a hardware failure in
the SAN fabric that results in the storage array losing physical connectivity to the fabric.
Persistent Ports technology does not negate the need for properly installed, configured, and maintained host multi-pathing software. Although
Persistent Ports technology will isolate a server from the need for path failover during firmware upgrades, in the event of a fabric hardware failure
resulting in a “loss_sync”, or in the event a node becomes unavailable due to a panic or loss of power, it will not protect from cable problems or host
HBA failures that do not result in a “loss_sync” on the storage array node. A properly configured multi-pathing environment provides protection
from these events.
Features and benefits
HPE 3PAR Persistent Ports functionality works for the following transport layers:
• FC
• FCoE
• iSCSI
HPE 3PAR Persistent Ports functionality provides transparent and uninterrupted failover in response to the following events:
• HPE 3PAR OS firmware upgrade
• Node maintenance that requires the node to be taken offline (e.g., adding a new HBA)
• HPE 3PAR node failure
• HPE 3PAR array “loss_sync” to the FC fabric
• HPE 3PAR array “loss_sync” to an iSCSI network
• Array host ports being taken offline administratively
In most cases, no configuration changes are required on the servers in an environment using the HPE 3PAR StoreServ system for Persistent
Ports functionality to be leveraged. All configuration work is done on the HPE 3PAR StoreServ and SAN fabric.
What’s new with Persistent Ports on HPE 3PAR?
Persistent Ports support in HPE 3PAR OS 3.1.2 was limited to FC connectivity only, and it could only respond to a node being taken offline for
firmware upgrades, node failure, or the HPE 3PAR OS CLI controlport command. Starting with HPE 3PAR OS 3.1.3, Persistent Ports was extended to
include FCoE and iSCSI connections, but “loss_sync” was not supported for these connections.
Starting with HPE 3PAR OS 3.2.2, the ability to detect and respond to a “loss_sync” event has been extended to iSCSI connections. The ability to
detect “loss_sync” is still not supported for FCoE connections.
Note
Starting with HPE 3PAR OS 3.2.2, Persistent Ports has the ability to detect and recover from a “loss_sync” event for an iSCSI connection to
the array.
Note
Even with HPE 3PAR OS 3.2.2 Persistent Ports does not recover from “loss_sync” events for FCoE connections to the array.
What does it do?
With HPE 3PAR Persistent Ports technology, each host facing port (FC, FCoE, and iSCSI) on an HPE 3PAR StoreServ node has both a “Native”
and a “Guest” identity associated with it. The “Guest” identity on a node corresponds to a “Native” port on the partner HPE 3PAR node in a node
pair. To provide greater resiliency and to avoid dependency on host multi-pathing software, HPE 3PAR Persistent Ports technology transparently
configures the network to redirect I/O requests from the “Native” port on one node to the “Guest” port on another node in response to firmware
upgrades, node failure, “loss_sync” events, or administrative requests. This redirection is done in a manner that is transparent to high-level host
software sending I/O requests to the node being upgraded and hence removes the need for host-level multi-pathing software to respond to
the event.
In some cases, certain operating systems may log errors as a result of the perturbations on the SAN fabric caused by a “loss_sync” event or a
firmware upgrade, but these perturbations do not require the multi-pathing software to intervene and do not result in a potential interruption to
high-level host I/Os.
For FC and FCoE connections, Persistent Ports technology leverages SAN fabric NPIV functionality to transparently migrate a port’s
“Native” Worldwide Name (WWN) to the “Guest” port on the other node in the node pair. For iSCSI connections, it migrates a port’s IP address
from the Native port on one node to the Guest port on the other node in the node pair. (This IP address migration is followed by an unsolicited
Address Resolution Protocol [ARP] announcement to force the updating of IP to MAC mappings on the network.)
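The gratuitous ARP mechanism described above can be illustrated with a short sketch. This is not HPE code; it simply builds the kind of broadcast ARP frame (sender and target IP both set to the moving address) that causes hosts on the network to remap the IP address to the Guest port’s MAC. The addresses used are placeholders.

```python
import struct

def gratuitous_arp_frame(ip: str, mac: str) -> bytes:
    """Build a gratuitous ARP request announcing `ip` at `mac`.

    Sender and target protocol addresses are both `ip`, so any host
    that receives the broadcast updates its ARP cache to map `ip`
    to the new MAC (here, the Guest port taking over the address).
    """
    mac_b = bytes.fromhex(mac.replace(":", ""))
    ip_b = bytes(int(octet) for octet in ip.split("."))
    eth = b"\xff" * 6 + mac_b + struct.pack("!H", 0x0806)  # broadcast dst, src MAC, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)        # Ethernet/IPv4, opcode=request
    arp += mac_b + ip_b                                    # sender MAC/IP: the new mapping
    arp += b"\x00" * 6 + ip_b                              # target MAC unknown, target IP = same IP
    return eth + arp

frame = gratuitous_arp_frame("10.0.0.5", "00:02:ac:00:08:eb")
```

The equal sender and target IPs are what mark the frame as gratuitous; switches and hosts simply relearn the address from the new source MAC.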
Requirements
Persistent Ports functionality is supported on HPE 3PAR OS 3.1.2 and later; there is no Persistent Ports support on pre-HPE 3PAR OS 3.1.2
releases. Starting with HPE 3PAR OS 3.1.3, support for FCoE-connected and iSCSI-connected hosts is added, along with the ability to detect an
array node suffering “loss_sync” (a physical layer problem occurring between the HPE 3PAR node and the switch it is connected to) for Fibre
Channel connected hosts; “loss_sync” detection is not supported for either FCoE or iSCSI connections at that release. Starting with HPE 3PAR
OS 3.2.2, the ability to detect and recover from a “loss_sync” resulting from a cable pull is added for iSCSI-connected hosts. There is still no ability
to detect and recover from a “loss_sync” for FCoE-connected hosts.
Note
HPE 3PAR Persistent Ports functionality does not detect and recover from a “loss_sync” event for FCoE connected hosts.
For HPE 3PAR StoreServ FC host ports, the following requirements must be met:
• The same host port on host facing HBAs in the nodes in a node pair must be connected to the same FC fabric and preferably different
FC switches on the fabric (for example, 0:1:1 and 1:1:1).
• The host facing HBAs must be set to “target” mode.
• The host facing HBAs must be configured for point-to-point connection (no support for “loop”).
• The FC fabric being used must support NPIV and have NPIV enabled.
For HPE 3PAR StoreServ FCoE host ports, the following requirements must be met:
• The same Converged Network Adapter (CNA) port on host facing HBAs in the nodes in a node pair must be connected to the same FCoE
network and preferably different FCoE switches on the network (for example, 0:1:1 and 1:1:1).
• The host facing CNAs must be set to “target” mode.
• The FCoE network being used must support NPIV and have NPIV enabled.
For HPE 3PAR StoreServ iSCSI host ports, the following requirements must be met:
• The same host port on host facing CNAs in the nodes in a node pair must be connected to the same IP network and preferably different
IP switches on the fabric (for example, 0:1:1 and 1:1:1).
There are no special requirements placed on the server HBAs and CNAs other than that they must be supported as per Single Point of
Connectivity Knowledge (SPOCK). See hp.com/storage/spock.
Configuration considerations
Persistent Ports requires that corresponding “Native” and “Guest” host ports on a node pair be connected to the same FC fabric or IP network,
and the switches they are connected to must support and be configured for NPIV in the case of FC and FCoE. This means that for a minimum
configuration to provide Persistent Ports functionality, where the node pair is connected to redundant FC SAN fabrics, each node in a node pair
must have at least two FC host ports cabled with one port connected to each fabric. Figure 1 shows a minimum two-fabric redundant
configuration that supports Persistent Ports for FC and figure 2 shows a configuration that will not support Persistent Ports functionality. Note
that the same slot:port on each node in the node pair are connected to the same FC fabric.
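As a sanity check, the cabling rule above (the same slot:port on each node in a node pair connected to the same fabric) can be expressed in a few lines. This is an illustrative sketch, not an HPE tool; the port names and fabric labels are placeholders.

```python
def persistent_ports_gaps(cabling: dict) -> list:
    """Flag host ports whose partner port (same slot:port on the
    partner node) is not cabled to the same fabric.

    `cabling` maps 'node:slot:port' -> fabric name.
    """
    problems = []
    for port, fabric in cabling.items():
        node, slot, p = (int(x) for x in port.split(":"))
        partner = f"{node ^ 1}:{slot}:{p}"  # node pairs are 0/1, 2/3, ...
        if cabling.get(partner) != fabric:
            problems.append(port)
    return sorted(problems)

# 0:1:2 is cabled to Fabric B but its partner port 1:1:2 is on Fabric A,
# so neither port of that pair can provide Persistent Ports failover.
cabling = {"0:1:1": "A", "1:1:1": "A", "0:1:2": "B", "1:1:2": "A"}
```

Running the check on this sample cabling flags both ports of the mismatched pair, mirroring the unsupported layout of figure 2.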
Figure 1. Minimum configuration—both the “Native” and “Guest” ports must be connected to the same FC fabric, FCoE network, or
iSCSI network.
In minimum configurations, HPE 3PAR StoreServ 7000 arrays that only have default FC port configurations (two ports) may not have enough
host ports to enable Persistent Ports in a dual fabric (figure 2) if ports are used for HPE 3PAR Remote Copy software or HPE 3PAR Peer Motion
software. For these configurations, the optional four port FC HBA is required to provide enough FC host ports per node to allow Persistent Ports.
Figure 2. A configuration that does not support Persistent Ports functionality because the corresponding “Native” and “Guest” ports are not
connected to the same FC fabric, FCoE network, or iSCSI network.
A best practice configuration will utilize a network mesh architecture where the “Native” and “Guest” ports in a Persistent Ports pair are
connected to different switches in the mesh (This is the case whether the connections are FC, FCoE, or iSCSI.). This allows the “loss_sync”
functionality of Persistent Ports to recover from a switch failure in an FC-based configuration. See figure 3 for a best practice Persistent Ports
configuration using FC that will protect from a node “loss_sync” event in the event of a switch failure.
Figure 3. Example of a best practice Persistent Ports configuration showing an FC mesh.
How does it work?
With Persistent Ports, FC, FCoE, or iSCSI host ports on an HPE 3PAR StoreServ node pair have both a “Native” and a “Guest” identity associated
with them. By having a Native and a Guest identity associated with its host ports, a node can be the backup for its partner node should that
node fail or lose connectivity to its network. A host port on a node can only have a Native and Guest identity if both it and its partner node’s
corresponding host port (the same port on an HBA or CNA in the same slot) are connected to a common SAN fabric, FCoE network, or
iSCSI subnet.
For FC and FCoE, the Native identity for a port is the port WWN that port has on a given node, for example, 20110002AC0008EB for the HBA in
Node 0, Slot 1, Port 1. Node 0’s partner node, Node 1, has a Native port with a similar identity, 21110002AC0008EB, for the HBA in Node 1, Slot 1,
Port 1 (see figure 4). So, for this example the “Native” and “Guest” identities for nodes 0 and 1 would be:
Node 0—20110002AC0008EB (Native) and 21110002AC0008EB (Guest)
Node 1—21110002AC0008EB (Native) and 20110002AC0008EB (Guest)
Figure 4. Native and Guest identity on host ports.
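Following the pattern in the sample WWNs above, where the second hex digit of the port WWN encodes the owning node, the Guest identity can be derived from the Native one. This is a sketch based only on the example WWNs shown here; the assumed WWN layout is an illustration and may vary by array.

```python
def guest_wwn(native_wwn: str) -> str:
    """Derive the Guest identity for a Native port WWN, assuming the
    layout of the sample WWNs above: the second hex digit is the node
    number, and partner nodes pair up as 0/1, 2/3, and so on."""
    node = int(native_wwn[1], 16)
    return native_wwn[0] + format(node ^ 1, "X") + native_wwn[2:]

guest_wwn("20110002AC0008EB")  # Node 0's Guest identity is Node 1's Native WWN
```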
It’s a little different with iSCSI since iSCSI does not use WWNs. In an iSCSI configuration, the “Native” and “Guest” port WWNs do not move; the IP
address configured on the Native port moves from the Native port to the Guest port, and an unsolicited ARP is transmitted to notify the network
of the change in the IP to MAC address mapping (see figure 5).
Figure 5. Native and Guest IP identity on CNA ports.
With Persistent Ports, during a firmware upgrade, a node failure, an administrative CLI command, or, in the case of FC, a loss of connectivity to
the fabric due to a “loss_sync”, the “Native” identity of a port on one node in a node pair becomes the “Guest” identity on the corresponding port
on that node’s partner. See figure 6 for an example of Persistent Ports failover as the result of a node failure.
Figure 6. If a node fails, its partner node’s port “Guest” identity takes over.
In the event of a failure or a planned activity on Node 0 in figure 6, the Guest port on Node 1 logs on to the fabric using the Guest identity for
port 0:1:1. This happens in a few seconds and is not visible at the SCSI layer.
When a Persistent Port failover occurs, the FailoverState of the host ports can be one of the following states:
• none: Failover not in operation.
• failover_pending: Failover to partner request has been issued but not yet completed (transient state).
• failed_over: This port is failed over to its partner.
• failback_pending: Failback request has been issued but not yet completed (transient state).
• active: The partner port is failed over to this port.
• active_down: The partner port failed over to this port, but this port is down, e.g., cable missing.
• active_failed: The partner port failed over to this port, but the action failed, e.g., FC switch did not have NPIV enabled.
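The partner-side states in the list above can be summarized as a small decision function. The state names come from this paper; the decision logic is an illustrative reading of their descriptions, not HPE code.

```python
def partner_failover_state(failed_over: bool, action_succeeded: bool, port_up: bool) -> str:
    """Return the FailoverState of the port that *receives* a failover."""
    if not failed_over:
        return "none"           # failover not in operation
    if not action_succeeded:
        return "active_failed"  # e.g., FC switch did not have NPIV enabled
    if not port_up:
        return "active_down"    # e.g., cable missing on this port
    return "active"             # serving both Native and Guest identities
```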
For example, with 0:1:1 failing over to 1:1:1, port 0:1:1 has the FailoverState of failover_pending:
Test_sys cli% showport 0:1:1 1:1:1
N:S:P  Mode    State  ----Node_WWN----  -Port_WWN/HW_Addr-  Type  Protocol  Label  Partner  FailoverState
0:1:1  target  ready  2FF70002AC0008EB  20110002AC0008EB    host  FC        -      1:1:1    failover_pending
1:1:1  target  ready  2FF70002AC0008EB  21110002AC0008EB    host  FC        -      0:1:1    active
Once the failover is complete, observe that 0:1:1 has the FailoverState as failed_over and 1:1:1 has the FailoverState of active:
Test_sys cli% showport 0:1:1 1:1:1
N:S:P  Mode    State  ----Node_WWN----  -Port_WWN/HW_Addr-  Type  Protocol  Label  Partner  FailoverState
0:1:1  target  ready  2FF70002AC0008EB  20110002AC0008EB    host  FC        -      1:1:1    failed_over
1:1:1  target  ready  2FF70002AC0008EB  21110002AC0008EB    host  FC        -      0:1:1    active
If the path fails on 1:1:1, observe that 1:1:1 has the FailoverState as active_down (This is the case where host port 1:1:1 on Node 1 is active for both
the Native and Guest identities but fails.):
Test_sys cli% showport 0:1:1 1:1:1
N:S:P  Mode    State  ----Node_WWN----  -Port_WWN/HW_Addr-  Type  Protocol  Label  Partner  FailoverState
0:1:1  target  ready  2FF70002AC0008EB  20110002AC0008EB    host  FC        -      1:1:1    failed_over
1:1:1  target  ready  2FF70002AC0008EB  21110002AC0008EB    host  FC        -      0:1:1    active_down
Setting the FC switches for NPIV
The FC switches that the HPE 3PAR StoreServ node pair is connected to must support NPIV and be configured for NPIV.
On Brocade, use the portCfgNPIVPort command to set NPIV on the SAN switch the HPE 3PAR StoreServ is connected to:
portCfgNPIVPort
Enables or disables NPIV functionality on a port and sets the per-port login limit.
Synopsis
portcfgnpivport --enable [slot/]port
portcfgnpivport --disable [slot/]port
portcfgnpivport --setloginlimit [slot/]port login_limit
On Cisco, use the following steps to enable or disable NPIV on the switch that the HPE 3PAR StoreServ is connected to:

COMMAND                           PURPOSE
switch# config t                  Enters configuration mode
switch(config)# feature npiv      Enables NPIV for all VSANs on the switch
switch(config)# no feature npiv   Disables (default) NPIV on the switch
Best practice, zoning, and multi-pathing considerations for FC and FCoE
Following the currently published best practices for zoning HPE 3PAR arrays will enable Persistent Ports functionality. Be aware that some
previous best practices did not require a server HBA with multiple paths to have a physical path to both nodes, or to the same slot:port, in a
node pair. Also, previous best practices only required that a server HBA be connected to a single HPE 3PAR StoreServ node. As we have
seen, for Persistent Ports to work, a server HBA must have a physical path (be cabled) to both the “Native” port and the “Guest” port on
a node pair (see figure 1), which means it must be cabled to both nodes in the node pair.
When setting up zoning, it is not necessary for the HBA port to be zoned to both the Native and Guest port on the array if you are using WWN
zoning, because Persistent Ports will move one port’s WWN to the port on the other node in the event of a failure. If you are using port-based
zoning, the Native identity from both ports in the Persistent Ports pair must be zoned to the server HBA; however, they may be in the same or
different zones. If you do zone the server HBA to both ports in a Persistent Ports pair, then load-balancing software can be used to load balance
host I/Os across the Native identity ports on the two nodes. See figure 7 for an example of zoning to both nodes for Persistent Ports support that
will allow load balancing across the nodes. See figure 8 for an example of zoning to only one node, which is supported, but will not allow a single
server’s HBA to I/O load balance across the two Native identity ports on a node pair.
Once again, the server HBA can use load-balancing algorithms across the nodes if it is zoned to both Native identity ports in a
Persistent Ports pair.
Figure 7. Host HBA zoning example for Persistent Ports support that can support load balancing across both pairs of Native
identity ports (port pair 0:1:1 and 1:1:1 and port pair 0:1:2 and 1:1:2).
Figure 8. Host HBA zoning example for Persistent Ports support that cannot support load balancing across the Native identity
ports (port pair 0:1:1 and 1:1:1 and port pair 0:1:2 and 1:1:2).
If an existing HPE 3PAR StoreServ is being upgraded to HPE 3PAR OS 3.1.3 or later and Persistent Ports functionality is desired, check the
cabling of the existing servers connected to the array and help ensure that common HBA slot:port pairs on partner nodes are connected to the
same fabric. If they aren’t, then zoning changes and/or cabling changes on the SAN fabric will be required.
Warning
SAN cabling changes and SAN zoning changes may require configuration changes on the server to help ensure the server’s ability to access its
logical unit numbers (LUNs) on the storage.
Firmware upgrades with Persistent Ports
Firmware upgrade on the HPE 3PAR StoreServ involves rebooting one controller node at a time. With Persistent Ports, when a node reboots as
part of the upgrade process, its partner node logs in to the FC fabric using the “Guest” identity. This all occurs in a manner that is transparent to
the host using the HPE StoreServ array, so no multi-pathing failover has to occur as a result. When the node being upgraded comes back online,
it logs in to the fabric using its original Native port identity and all the servers’ I/O requests are then transparently redirected back to that node.
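The rolling-upgrade sequence above, expressed in terms of the FailoverState values listed earlier, might look like the following. The ordering is an illustration of the described behavior, not captured CLI output.

```python
def upgrade_timeline(rebooting="0:1:1", partner="1:1:1"):
    """Yield the approximate FailoverState of each port as one node
    is rebooted during a firmware upgrade (illustrative only)."""
    yield {rebooting: "failover_pending", partner: "active"}  # failover issued
    yield {rebooting: "failed_over", partner: "active"}       # partner holds both identities
    yield {rebooting: "failback_pending", partner: "active"}  # node back online, failing back
    yield {rebooting: "none", partner: "none"}                # Native identity restored
```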
Using Matched Set VLUN templates with Persistent Ports
If you are using “Matched Set” VLUN templates to export VVs, the VLUNs are only exported on the physical array ports that have exports
associated to them. This restriction can interfere with the mechanism of Persistent Ports. When using Matched Set VLUN templates for exports
and Persistent Ports together, you must enable the “FailoverMatchedSet” system configuration parameter using the CLI. Execute the following
CLI command to enable this parameter:
# setsys FailoverMatchedSet yes
The default for this setting is disabled or “no.”
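To confirm the current value after changing it, the system parameters can be listed with the CLI. (This assumes the standard `showsys -param` parameter listing; the exact output varies by HPE 3PAR OS version.)

```
# showsys -param
```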
HPE Virtual Connect considerations with Persistent Ports
The HPE Virtual Connect (VC) technology simplifies the networking configuration for the server administrator using an HPE BladeSystem
c-Class environment. The HPE VC technology virtualizes the connections between the server and the LAN and SAN network infrastructure.
VC firmware 3.70 or later supports two VC SAN configurations, Fabric Attach and Direct Attach. With Direct Attach FC support, users can now
connect HPE 3PAR Storage systems directly to the HPE VC FlexFabric module with no need for an expensive intermediate SAN fabric. In
addition to being much more cost-efficient, this makes managing your storage solution easier, freeing up valuable IT resources and
reducing costs.
The VC Direct Attach auto-configured zoning (aka Implicit Zoning) allows servers connected to a Direct-Attach Fabric to access storage devices
connected to the uplinks within that Direct-Attach Fabric. No manual zoning configuration is required. The zoning is automatically
configured based on the VC fabric definitions and profile SAN connectivity selections.
When a VC Direct-Attach Fabric is using multiple uplinks (figure 6), hosts will see as many paths to the storage as there are uplinks to the HPE
3PAR StoreServ array. The concept of login balancing or login redistribution is not applicable in this case as these concepts are only provided on
uplinks within a VC Fabric-Attach Fabric.
Persistent Ports requires that corresponding “Native” and “Guest” host ports on a node pair be connected to the same FC fabric. This means that
for a minimum configuration to provide Persistent Ports functionality where the node pair is connected to redundant VC SAN fabrics, each node
in a node pair must have at least two FC host ports configured with one port connected to each fabric. The figure below shows a configuration
that supports Persistent Ports functionality.
Uplinks from a VC module in one bay should be connected to the same port pair (Native and Guest) on a node pair.
Figure 9. StoreServ host port connections to HPE VC FlexFabric modules.
A maximum of four uplinks per VC module are available, and as a result, a total of eight uplinks from both VCs in a blade chassis are available to
be connected to the array. The host will see as many paths as the number of uplinks from the VC connected to the HPE 3PAR StoreServ array.
The user can create multiple VC SANs to isolate servers and in this case isolate the server’s workload (figure 10).
Figure 10. Using multiple VC SANs to isolate servers.
When connecting more than one HPE 3PAR StoreServ array, it is recommended to create multiple VC SANs to isolate the traffic of the arrays.
Figure 11. Using multiple VC SANs to isolate multiple arrays sharing a pair of HPE VC FlexFabric modules.
When connecting multiple HPE 3PAR StoreServ arrays, the user should consider whether a single pair of VC FlexFabric modules is sufficient or
whether multiple VC FlexFabric module pairs should be added. Please note that the maximum number of HPE 3PAR StoreServ arrays
supported by a VC FlexFabric module in Direct Attach mode is four.
Per the HPE VC user and installation guide, it is recommended that all VC domains connected to the same HPE 3PAR Storage system use
different HPE predefined ranges of WWN addresses. This will help avoid storage networking issues and potential loss of data associated with
duplicate WWNs on the HPE 3PAR StoreServ array system.
For more information, see the HPE Virtual Connect FlexFabric Cookbook.
Learn more at
hp.com/go/3PARStoreServ
© Copyright 2012–2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial
errors or omissions contained herein.
4AA4-4545ENW, November 2015, Rev. 5