iSCSI SAN Topologies
Version 3.2
• iSCSI SAN Topology Overview
• TCP/IP and iSCSI Overview
• Use Case Scenarios
Jonghoon (Jason) Jeong
Copyright © 2011 - 2015 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United
States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support
(https://support.emc.com).
Part number H8080.6
Contents
Preface............................................................................................................................ 11
Chapter 1    TCP/IP Technology
TCP/IP overview.............................................................................. 18
    Transmission Control Protocol ................................................ 18
    Internet Protocol ........................................................................ 20
TCP terminology............................................................................... 21
TCP error recovery............................................................................ 25
TCP network congestion.................................................................. 28
IPv6 ..................................................................................................... 29
    Features of IPv6.......................................................................... 29
    Deployment status..................................................................... 31
    Addressing.................................................................................. 32
    IPv6 packet.................................................................................. 37
    Transition mechanisms ............................................................. 38
Internet Protocol security (IPsec).................................................... 40
    Tunneling and IPsec .................................................................. 40
    IPsec terminology ...................................................................... 41
Chapter 2    iSCSI Technology
iSCSI technology overview.............................................................. 44
iSCSI discovery.................................................................................. 46
    Static............................................................................................. 46
    Send target .................................................................................. 46
    iSNS.............................................................................................. 46
iSCSI error recovery.......................................................................... 47
iSCSI security..................................................................................... 48
Security mechanisms................................................................. 48
Authentication methods ........................................................... 49
Chapter 3    iSCSI Solutions
Network design best practices........................................................ 52
EMC native iSCSI targets................................................................. 53
    Symmetrix................................................................................... 53
    VNX for Block and CLARiiON................................................ 54
    Celerra Network Server............................................................ 55
    VNX series for File..................................................................... 56
Configuring iSCSI targets ................................................................ 58
Bridged solutions.............................................................................. 60
    Brocade........................................................................................ 60
    Cisco ............................................................................................ 63
Summary............................................................................................ 69
Chapter 4    Use Case Scenarios
Connecting an iSCSI Windows host to a VMAX array ............... 72
Configuring storage port flags and an IP address on a
VMAX array ............................................................................... 72
Configuring LUN Masking on a VMAX array...................... 77
Configuring an IP address on a Windows host .................... 79
Configuring iSCSI on a Windows host................................... 81
Configuring Jumbo frames ...................................................... 97
Setting MTU on a Windows host ............................................ 97
Connecting an iSCSI Linux host to a VMAX array...................... 99
Configuring storage port flags and an IP address on a
VMAX array ............................................................................. 100
Configuring LUN Masking on a VMAX array.................... 107
Configuring an IP address on a Linux host ......................... 110
Configuring CHAP on the Linux host.................................. 113
Configuring iSCSI on a Linux host using Linux iSCSI
Initiator CLI .............................................................................. 113
Configuring Jumbo frames .................................................... 115
Setting MTU on a Linux host................................................. 115
Configuring the VNX for block 1 Gb/10 Gb iSCSI port ........... 117
Prerequisites ............................................................................. 117
Configuring storage system iSCSI front-end ports ............ 118
Assigning an IP address to each NIC or iSCSI HBA in a
Windows Server 2008 ............................................................. 123
Configuring iSCSI initiators for a configuration
without iSNS............................................................................. 126
Registering the server with the storage system................... 142
Setting storage system failover values for the server
initiators with Unisphere ........................................................ 144
Configuring the storage group .............................................. 159
iSCSI CHAP authentication.................................................... 172
Connecting an iSCSI Windows host to an XtremIO array ........ 173
Prerequisites ............................................................................. 173
Configuring storage system iSCSI portal ............................. 174
Assigning an IP address to each NIC or iSCSI HBA in a
Windows Server 2008 .............................................................. 176
Configuring iSCSI initiator on a Windows host.................. 178
Configuring LUN masking on an XtremIO array ............... 184
Detecting the iSCSI LUNs from Windows host................... 189
Figures
1    TCP header example ...................................................................................... 19
2    TCP header fields, size, and functions ........................................................ 19
3    Slow start and congestion avoidance .......................................................... 26
4    Fast retransmit ................................................................................................ 27
5    IPv6 packet header structure ........................................................................ 37
6    iSCSI example ................................................................................................. 44
7    iSCSI header example .................................................................................... 45
8    iSCSI header fields, size, and functions ...................................................... 45
9    Celerra iSCSI configurations ......................................................................... 55
10   VNX 5000 series iSCSI configuration .......................................................... 56
11   VNX VG2 iSCSI configuration ..................................................................... 57
12   iSCSI gateway service basic implementation ............................................. 60
13   Supportable configuration example ............................................................ 64
14   Windows host connected to a VMAX array with 1 G connectivity ........ 72
15   EMC Symmetrix Manager Console, Directors ........................................... 73
16   Set Port Attributes dialog box ...................................................................... 74
17   Config Session tab .......................................................................................... 75
18   My Active Tasks, Commit All ...................................................................... 75
19   EMC Symmetrix Management Console, Storage Provisioning ............... 77
20   Internet Protocol Version 6 (TCP/IPv6) Properties dialog box ............... 80
21   Test connectivity ............................................................................................. 80
22   iSCSI Initiator Properties window ............................................................... 82
23   Discovery tab, Discover Portal ..................................................................... 83
24   Discover Portal dialog box ............................................................................ 84
25   Advanced Settings window .......................................................................... 85
26   Target portals .................................................................................................. 86
27   Targets tab ....................................................................................................... 86
28   Connect to Target dialog box ........................................................................ 87
29   Discovered targets .......................................................................................... 87
30   Volume and Devices tab ................................................................................ 88
31   Devices ............................................................................................................. 89
32   iSNS Server Properties window, storage ports .......................................... 90
33   Discovery tab .................................................................................................. 91
34   iSNS Server added ......................................................................................... 92
35   iSNS Server ...................................................................................................... 93
36   Linux hosts connected to a VMAX array with 10 G connectivity ........... 99
37   Set port attributes ......................................................................................... 101
38   Set Port Attributes dialog box .................................................................... 102
39   Config Session tab ........................................................................................ 103
40   My Active Tasks, Commit All .................................................................... 104
41   CHAP authentication .................................................................................. 105
42   Director Port CHAP Authentication Enable/Disable dialog box ......... 105
43   Director Port CHAP Authentication Set dialog box ............................... 106
44   EMC Symmetrix Management Console, Storage Provisioning ............ 108
45   Verify IP addresses ...................................................................................... 111
46   Test connectivity ........................................................................................... 113
47   Windows host connected to a VNX array with 1 G/10 G connectivity ... 117
48   Unisphere, System tab ................................................................................. 119
49   Message box .................................................................................................. 120
50   iSCSI Port Properties window .................................................................... 121
51   iSCSI Virtual Port Properties window ...................................................... 122
52   Warning message ......................................................................................... 123
53   Successful message ...................................................................................... 123
54   Control Panel, Network Connections window ....................................... 124
55   Local Area Connection Properties dialog box ......................................... 125
56   Internet Protocol Version 4 (TCP/IPv4) Properties dialog box ............ 126
57   EMC Unisphere Server Utility welcome window ................................... 128
58   EMC Unisphere Server Utility window, Configure iSCSI Connections ... 129
59   iSCSI Targets and Connections window .................................................. 130
60   Discover iSCSI targets on this subnet ....................................................... 131
61   Discover iSCSI targets for this target portal ............................................. 132
62   iSCSI Targets window ................................................................................. 133
63   Successful logon message ........................................................................... 134
64   Server registration window ........................................................................ 135
65   Successfully updated message ................................................................... 136
66   Microsoft iSCSI Initiator Properties dialog box ....................................... 137
67   Discovery tab ................................................................................................ 137
68   Add Target Portal dialog box ..................................................................... 138
69   Advanced Settings dialog box, General tab ............................................. 138
70   iSCSI Initiator Properties dialog box, Discovery tab .............................. 139
71   iSCSI Initiator Properties dialog box, Targets tab ................................... 140
72   Log on to Target dialog box ........................................................................ 140
73   Target, Connected ......................................................................................... 141
74   EMC Unisphere Server Utility, welcome window .................................. 142
75   Connected Storage Systems ........................................................................ 143
76   Successfully updated message .................................................................... 144
77   EMC Unisphere, Hosts tab .......................................................................... 145
78   Start Wizard dialog box ............................................................................... 146
79   Select Host dialog box .................................................................................. 147
80   Select Storage System dialog box ............................................................... 148
81   Specify Settings dialog box .......................................................................... 149
82   Review and Commit Settings ..................................................................... 151
83   Failover Setup Wizard Confirmation dialog box ..................................... 152
84   Details from Operation dialog box ............................................................ 153
85   EMC Unisphere, Hosts tab .......................................................................... 154
86   Connectivity Status Window, Host Initiators tab .................................... 154
87   Expanded hosts ............................................................................................. 155
88   Edit Initiators window ................................................................................. 155
89   Confirmation dialog box .............................................................................. 157
90   Success confirmation message .................................................................... 157
91   Connectivity Status window, Host Initiators tab ..................................... 158
92   Initiator Information window ..................................................................... 158
93   Select system .................................................................................................. 159
94   Select Storage Groups .................................................................................. 160
95   Storage Groups window .............................................................................. 161
96   Create Storage dialog box ............................................................................ 161
97   Confirmation dialog box .............................................................................. 162
98   Storage Group, Properties ........................................................................... 163
99   Hosts tab ........................................................................................................ 163
100  Hosts to be Connected column .................................................................. 164
101  Connect LUNs ............................................................................................... 165
102  LUNs tab ........................................................................................................ 166
103  Selected LUNs ............................................................................................... 167
104  Confirmation dialog box .............................................................................. 167
105  Success message box .................................................................................... 168
106  Added LUNs ................................................................................................. 168
107  Computer Management window ............................................................... 169
108  Rescanned disks ............................................................................................ 170
109  PowerPath icon ............................................................................................. 170
110  EMC PowerPath Console screen ................................................................ 171
111  Disks ............................................................................................................... 171
112  Windows host connected to an XtremIO array ........................................ 173
113  XtremIO iSCSI port locations ...................................................................... 174
114  iSCSI Network Configuration window ..................................................... 175
115  Edit X1-N1-iscsi1 iSCSI Portal dialog box ................................................ 175
116  Control Panel, Network Connections window ....................................... 176
117  Local Area Connection Properties dialog box ......................................... 177
118  Internet Protocol Version 4 (TCP/IPv4) Properties dialog box ............ 178
119  iSCSI Initiator Properties window ............................................................. 179
120  Discovery tab ................................................................................................ 180
121  Discover Target Portal dialog box ............................................................. 180
122  Targets display ............................................................................................. 181
123  Targets tab ..................................................................................................... 182
124  Connect to Target dialog box ..................................................................... 182
125  Host connected to targets ............................................................................ 183
126  Second iSCSI target ...................................................................................... 184
127  Main menu .................................................................................................... 184
128  Add New Volumes screen .......................................................................... 185
129  New folder dialog box ................................................................................. 186
130  Add New Volumes screen .......................................................................... 186
131  Configuration window ................................................................................ 187
132  Add Initiator Group window ..................................................................... 187
133  Add Initiator dialog box .............................................................................. 188
134  Initiator Groups displayed ......................................................................... 188
135  LUN Mapping Configuration window ..................................................... 189
136  iSCSI Initiator Properties window ............................................................. 190
137  EMC PowerPath Console ............................................................................ 191
Preface
This EMC Engineering TechBook provides a high-level overview of iSCSI
SAN topologies and includes basic information about TCP/IP technologies
and iSCSI solutions.
E-Lab would like to thank all the contributors to this document, including
EMC engineers, EMC field personnel, and partners. Your contributions are
invaluable.
As part of an effort to improve and enhance the performance and capabilities
of its product lines, EMC periodically releases revisions of its hardware and
software. Therefore, some functions described in this document may not be
supported by all versions of the software or hardware currently in use. For
the most up-to-date information on product features, refer to your product
release notes. If a product does not function properly or does not function as
described in this document, please contact your EMC representative.
Audience
This TechBook is intended for EMC field personnel, including
technology consultants, and for the storage architect, administrator,
and operator involved in acquiring, managing, operating, or
designing a networked storage environment that contains EMC and
host devices.
EMC Support Matrix and E-Lab Interoperability Navigator
For the most up-to-date information, always consult the EMC Support
Matrix (ESM), available through E-Lab Interoperability Navigator
(ELN) at http://elabnavigator.EMC.com.
Related documentation
Related documents include:
◆ The following documents, including this one, are available
  through the E-Lab Interoperability Navigator at
  http://elabnavigator.EMC.com. These documents are also available at:
  http://www.emc.com/products/interoperability/topology-resource-center.htm
  • Backup and Recovery in a SAN TechBook
  • Building Secure SANs TechBook
  • Extended Distance Technologies TechBook
  • Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook
  • Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Case Studies TechBook
  • Fibre Channel SAN Topologies TechBook
  • Networked Storage Concepts and Protocols TechBook
  • Networking for Storage Virtualization and RecoverPoint TechBook
  • WAN Optimization Controller Technologies TechBook
  • EMC Connectrix SAN Products Data Reference Manual
  • Legacy SAN Technologies Reference Manual
  • Non-EMC SAN Products Data Reference Manual
◆ EMC Support Matrix, available through E-Lab Interoperability
  Navigator at http://elabnavigator.EMC.com
◆ RSA security solutions documentation, which can be found at
  http://RSA.com > Content Library
All of the following documentation and release notes can be found at
EMC Online Support at https://support.emc.com.
EMC hardware documents and release notes include those on:
◆ Connectrix B series
◆ Connectrix MDS (release notes only)
◆ VNX series
◆ CLARiiON
◆ Celerra
◆ Symmetrix
◆ VMAX
EMC software documents include those on:
◆ RecoverPoint
◆ TimeFinder
◆ PowerPath
The following E-Lab documentation is also available:
◆ Host Connectivity Guides
◆ HBA Guides
For Cisco and Brocade documentation, refer to the vendor’s website:
◆ http://cisco.com
◆ http://brocade.com
Authors of this TechBook
This TechBook was authored by Ron Dharma, Vinay Jonnakuti, and
Jonghoon (Jason) Jeong, with contributions from EMC engineers,
EMC field personnel, and partners.
Jonghoon (Jason) Jeong is a Systems Integration Engineer and has
been with EMC for over 6 years. Jonghoon works in E-Lab qualifying
new CLARiiON/VNX, Invista, and PowerPath Migration Enabler
releases.
Conventions used in this document
EMC uses the following conventions for special notices:
IMPORTANT
An important notice contains information essential to software or
hardware operation.
Note: A note presents information that is important, but not hazard-related.
Typographical conventions
EMC uses the following type style conventions in this document.
Normal      Used in running (nonprocedural) text for:
            • Names of interface elements (such as names of windows, dialog
              boxes, buttons, fields, and menus)
            • Names of resources, attributes, pools, Boolean expressions, buttons,
              DQL statements, keywords, clauses, environment variables,
              functions, utilities
            • URLs, pathnames, filenames, directory names, computer names,
              links, groups, service keys, file systems, notifications
Italic      Used in all text (including procedures) for:
            • Full titles of publications referenced in text
            • Emphasis (for example, a new term)
            • Variables
Bold
Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes,
services, applications, utilities, kernels, notifications, system calls,
man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog
boxes, buttons, fields, and menus)
• What user specifically selects, clicks, presses, or types
Courier
Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown
outside of running text
Courier bold Used for:
• Specific user input (such as commands)
Courier italic  Used in procedures for:
                • Variables on command line
                • User input variables
<>    Angle brackets enclose parameter or variable values supplied by the user
[]    Square brackets enclose optional values
|     Vertical bar indicates alternate selections; the bar means “or”
{}    Braces indicate content that you must specify (that is, x or y or z)
...   Ellipses indicate nonessential information omitted from the example
Where to get help
EMC support, product, and licensing information can be obtained on
the EMC Online Support site as described next.
Note: To open a service request through the EMC Online Support site, you
must have a valid support agreement. Contact your EMC sales representative
for details about obtaining a valid support agreement or to answer any
questions about your account.
Product information
For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the
EMC Online Support site (registration required) at:
https://support.EMC.com
Technical support
EMC offers a variety of support options.
Support by Product — EMC offers consolidated, product-specific
information on the Web at:
https://support.EMC.com/products
The Support by Product web pages offer quick links to
Documentation, White Papers, Advisories (such as frequently used
Knowledgebase articles), and Downloads, as well as more dynamic
content, such as presentations, discussion, relevant Customer
Support Forum entries, and a link to EMC Live Chat.
EMC Live Chat — Open a Chat or instant message session with an
EMC Support Engineer.
eLicensing support
To activate your entitlements and obtain your Symmetrix license files,
visit the Service Center on https://support.EMC.com, as directed on
your License Authorization Code (LAC) letter e-mailed to you.
For help with missing or incorrect entitlements after activation (that
is, expected functionality remains unavailable because it is not
licensed), contact your EMC Account Representative or Authorized
Reseller.
For help with any errors applying license files through Solutions
Enabler, contact the EMC Customer Support Center.
If you are missing a LAC letter, or require further instructions on
activating your licenses through the Online Support site, contact
EMC's worldwide Licensing team at [email protected] or call:
◆ North America, Latin America, APJK, Australia, New Zealand:
  SVC4EMC (800-782-4362) and follow the voice prompts.
◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.
We'd like to hear from you!
Your suggestions will help us continue to improve the accuracy,
organization, and overall quality of the user publications. Send your
opinions of this document to:
[email protected]
Your feedback on our TechBooks is important to us! We want our
books to be as helpful and relevant as possible. Send us your
comments, opinions, and thoughts on this or any other TechBook to:
[email protected]
1    TCP/IP Technology
This chapter provides a brief overview of TCP/IP technology.
◆ TCP/IP overview ............................................................................... 18
◆ TCP terminology ................................................................................ 21
◆ TCP error recovery............................................................................. 25
◆ TCP network congestion................................................................... 28
◆ IPv6....................................................................................................... 29
◆ Internet Protocol security (IPsec) ..................................................... 40
TCP/IP overview
The Internet Protocol Suite is named from the first two networking
protocols defined in this standard, each briefly described in this
section:
◆ "Transmission Control Protocol" on page 18
◆ "Internet Protocol" on page 20
Transmission Control Protocol
The Transmission Control Protocol (TCP) provides a communication
service between an application program and the Internet Protocol
(IP). The entire suite is commonly referred to as TCP/IP. When an
application program wants to send a large chunk of data across the
Internet using IP, the software can issue a single request to TCP and
let TCP handle the IP details.
TCP is a connection-oriented transport protocol that guarantees
reliable in-order delivery of a stream of bytes between the endpoints
of a connection. TCP achieves this by assigning each byte of data a unique sequence number, maintaining timers, acknowledging received data with acknowledgements (ACKs), and retransmitting data if necessary.
Data can be transferred after a connection is established between the
endpoints. The data stream that passes across the connection is
considered a single sequence of eight-bit bytes, each of which is given
a sequence number.
TCP accepts data from a data stream, segments it into chunks, and
adds a TCP header. A TCP header follows the internet header,
supplying information specific to the TCP protocol. This division
allows for the existence of host-level protocols other than TCP.
Figure 1 on page 19 shows an example of a TCP header.
Figure 1: TCP header example
Figure 2 on page 19 defines the fields, size, and functions of the TCP
header.
Figure 2: TCP header fields, size, and functions
Internet Protocol
The Internet Protocol (IP) is the main communications protocol used
for relaying datagrams (packets) across an internetwork using the
Internet Protocol Suite. It is responsible for routing packets across
network boundaries.
TCP terminology
This section defines common TCP terminology.
Acknowledgements (ACKs)
The TCP acknowledgement scheme is cumulative as it acknowledges
all the data received up until the time the ACK was generated. As
TCP segments are not of uniform size and a TCP sender may
retransmit more data than what was in a missing segment, ACKs do
not acknowledge the received segment, rather they mark the position
of the acknowledged data in the stream. The policy of cumulative
acknowledgement makes the generation of ACKs easy, and the loss of an ACK does not force the sender to retransmit data. The disadvantage
is that the sender does not receive any detailed information about the
data received except the position in the stream of the last byte that
has been received.
Delayed ACKs
Delayed ACKs allow a TCP receiver to refrain from sending an ACK
for each incoming segment. However, a receiver should send an ACK
for every second full-sized segment that arrives. Furthermore, the
standard mandates that a receiver must not withhold an ACK for
more than 500 ms. The receivers should not delay ACKs that
acknowledge out-of-order segments.
Maximum segment size (MSS)
The maximum segment size (MSS) is the maximum amount of data,
specified in bytes, that can be transmitted in a segment between the
two TCP endpoints. The MSS is decided by the endpoints, as they
need to agree on the maximum segment they can handle. Deciding on
a good MSS is important in a general inter-networking environment
because this decision greatly affects performance. It is difficult to
choose a good MSS value since a very small MSS means an
underutilized network, whereas a very large MSS means large IP
datagrams that may lead to IP fragmentation, greatly hampering the
performance. An ideal MSS size would be when the IP datagrams are
as large as possible without any fragmentation anywhere along the
path from the source to the destination. When TCP sends a segment
with the SYN bit set during connection establishment, it can send an
optional MSS value up to the outgoing interface’s MTU minus the
size of the fixed TCP and IP headers. For example, if the MTU is 1500
(the Ethernet standard), the sender can advertise an MSS of 1460 (1500 minus 40).
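The arithmetic above can be sketched in a few lines (a simple illustration; the 40-byte figure assumes the fixed 20-byte TCP and 20-byte IP headers with no options):

```python
# Advertised MSS = interface MTU minus the fixed TCP and IP headers
# (20 bytes each, assuming no header options are in use).
TCP_HEADER = 20
IP_HEADER = 20

def advertised_mss(mtu):
    """Largest TCP payload that fits in one unfragmented IP datagram."""
    return mtu - TCP_HEADER - IP_HEADER

print(advertised_mss(1500))   # standard Ethernet -> 1460
print(advertised_mss(9000))   # jumbo-frame Ethernet -> 8960
```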
Maximum transmission unit (MTU)
Each network interface has its own MTU that defines the largest
packet that it can transmit. The MTU of the media determines the
maximum size of the packets that can be transmitted without IP
fragmentation.
Retransmission
A TCP sender starts a timer when it sends a segment and expects an
acknowledgement for the data it sent. If the sender does not receive
an acknowledgement for the data before the timer expires, it assumes
that the data was lost or corrupted and retransmits the segment. Since
the time required for the data to reach the receiver and for the
acknowledgement to reach the sender is not constant (because of the
varying Internet delays), an adaptive retransmission algorithm is
used to monitor performance of each connection and conclude a
reasonable value for timeout based on the round trip time.
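One widely used adaptive algorithm of this kind is the smoothed round-trip-time estimator standardized in RFC 6298. The sketch below uses that RFC's published constants (alpha = 1/8, beta = 1/4, a one-second minimum); it illustrates the equations and is not a faithful TCP stack implementation:

```python
class RtoEstimator:
    """Adaptive retransmission timeout per the RFC 6298 equations."""
    ALPHA = 1 / 8   # gain applied to the smoothed RTT
    BETA = 1 / 4    # gain applied to the RTT variance
    MIN_RTO = 1.0   # RFC 6298 lower bound, in seconds

    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variance

    def on_rtt_sample(self, r):
        """Feed one measured RTT (seconds); return the new timeout."""
        if self.srtt is None:                 # first measurement
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        return max(self.MIN_RTO, self.srtt + 4 * self.rttvar)
```

Each new sample nudges the smoothed estimate; a sudden increase in measured RTT inflates the variance term and therefore the timeout, which is exactly the adaptive behavior described above.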
Selective Acknowledgement (SACK)
TCP may experience poor performance when multiple packets are
lost from one window of data. With the limited information available
from cumulative acknowledgements, a TCP sender can only learn
about a single lost packet per round trip time. An aggressive sender
could choose to retransmit packets early, but such retransmitted
segments may have already been successfully received. The Selective
Acknowledgement (SACK) mechanism, combined with a selective
repeat retransmission policy, helps to overcome these limitations. The
receiving TCP sends back SACK packets to the sender, confirming receipt of data and identifying the holes in the data that has been received. The sender can then retransmit only the missing data segments. The selective acknowledgment extension uses two TCP options. The first is an enabling option, SACK-permitted, which may be sent in a SYN segment to indicate that the SACK option can be used once the connection is established. The other is the SACK option itself, which may be sent over an established connection once permission has been given by SACK-permitted.
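Given the cumulative ACK and the SACK blocks, the sender can compute the holes to retransmit. A minimal sketch, with sequence numbers simplified to plain integers (real TCP sequence numbers wrap at 2^32, which this illustration ignores):

```python
def missing_ranges(cum_ack, sack_blocks, highest_sent):
    """Return (start, end) byte ranges not yet acknowledged.

    cum_ack      -- all bytes below this number are acknowledged
    sack_blocks  -- list of (start, end) ranges acknowledged out of order
    highest_sent -- one past the last byte transmitted
    """
    holes = []
    pos = cum_ack
    for start, end in sorted(sack_blocks):
        if start > pos:
            holes.append((pos, start))     # gap before this SACK block
        pos = max(pos, end)
    if pos < highest_sent:
        holes.append((pos, highest_sent))  # tail not yet SACKed (may still be in flight)
    return holes

# Receiver got bytes up to 1000, plus 2000-3000 and 4000-4500 out of order.
print(missing_ranges(1000, [(2000, 3000), (4000, 4500)], 5000))
# [(1000, 2000), (3000, 4000), (4500, 5000)]
```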
TCP segment
The TCP segments are units of transfer for TCP and used to establish
a connection, transfer data, send ACKs, advertise window size, and
close a connection. Each segment is divided into three parts:
◆ Fixed header of 20 bytes
◆ Optional variable-length header, padded out to a multiple of 4 bytes
◆ Data
The maximum possible header size is 60 bytes. The TCP header
carries the control information. SOURCE PORT and DESTINATION PORT contain TCP port numbers that identify the
application programs at the endpoints. The SEQUENCE NUMBER
field identifies the position in the sender’s byte stream of the first
byte of attached data, if any, and the ACKNOWLEDGEMENT
NUMBER field identifies the number of the byte the source expects
to receive next. The ACKNOWLEDGEMENT NUMBER field is
valid only if the ACK bit in the CODE BITS field is set. The 6-bit
CODE BITS field is used to determine the purpose and contents of
the segment. The HLEN field specifies the total length of the fixed
plus variable headers of the segment as a number of 32-bit words.
TCP software advertises how much data it is willing to receive by
specifying its buffer size in the WINDOW field. The CHECKSUM
field contains a 16-bit integer checksum used to verify the integrity of
the data as well as the TCP header and the header options. The TCP
header padding is used to ensure that the TCP header ends and data
begins on a 32-bit boundary. The padding is composed of zeros.
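The fixed-header layout just described can be decoded directly with Python's struct module (network byte order). This is an illustrative parser for the 20-byte fixed header only; options, when HLEN exceeds five words, follow it:

```python
import struct

def parse_tcp_header(segment):
    """Decode the 20-byte fixed TCP header at the front of a segment."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack('!HHIIHHHH', segment[:20])
    hlen_words = offset_flags >> 12          # HLEN: header length in 32-bit words
    return {
        'src_port': src_port,
        'dst_port': dst_port,
        'seq': seq,
        'ack': ack,
        'header_bytes': hlen_words * 4,      # 20 to 60 bytes
        'code_bits': offset_flags & 0x3F,    # URG/ACK/PSH/RST/SYN/FIN
        'window': window,
        'checksum': checksum,
        'urgent': urgent,
    }

# A synthetic header: ports 3260 (iSCSI) and 49152, HLEN = 5 words, ACK bit set.
hdr = struct.pack('!HHIIHHHH', 3260, 49152, 1, 101, (5 << 12) | 0x10, 65535, 0, 0)
print(parse_tcp_header(hdr)['header_bytes'])   # 20
```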
TCP window
A TCP window is the amount of data a sender can send without
waiting for an ACK from the receiver. The TCP window is a flow
control mechanism and ensures that no congestion occurs in the
network. For example, if a pair of hosts are talking over a TCP
connection that has a TCP window size of 64 KB, the sender can only
send 64 KB of data and it must stop and wait for an
acknowledgement from the receiver that some or all of the data has
been received. If the receiver acknowledges that all the data has been
received, the sender is free to send another 64 KB. If the sender gets
back an acknowledgement from the receiver that it received the first
32 KB (which is likely if the second 32 KB was still in transit or it is
lost), then the sender could only send another 32 KB since it cannot
have more than 64 KB of unacknowledged data outstanding (the
second 32 KB of data plus the third).
The primary reason for the window is congestion control. The whole
network connection, which consists of the hosts at both ends, the
routers in between, and the actual connections themselves, might
have a bottleneck somewhere that can only handle so much data so
fast. The TCP window throttles the transmission speed down to a
level where congestion and data loss do not occur.
The factors affecting the window size are as follows:
Receiver’s advertised window
The time taken by the receiver to process the received data and send
ACKs may be greater than the sender’s processing time, so it is
necessary to control the transmission rate of the sender to prevent it from sending more data than the receiver can handle, thus causing
packet loss. TCP introduces flow control by declaring a receive
window in each segment header.
Sender’s congestion window
The congestion window controls the number of packets a TCP flow
has in the network at any time. The congestion window is set using
an Additive-Increase, Multiplicative-Decrease (AIMD) mechanism
that probes for available bandwidth, dynamically adapting to
changing network conditions.
Usable window
This is the minimum of the receiver’s advertised window and the
sender’s congestion window. It is the actual amount of data that the
sender is able to transmit. The TCP header uses a 16-bit field to report the receive window size to the sender. Therefore, the largest window that can be advertised is 2^16 − 1 = 65,535 bytes (about 64 KB).
Window scaling
The ordinary TCP header allocates only 16 bits for window
advertisement. This limits the maximum window that can be
advertised to 64 KB, limiting the throughput. RFC 1323 provides the
window scaling option, to be able to advertise windows greater than
64 KB. Both the endpoints must agree to use window scaling during
connection establishment.
The window scale extension expands the definition of the TCP
window to 32 bits and then uses a scale factor to carry this 32-bit
value in the 16-bit Window field of the TCP header (SEG.WND in
RFC-793). The scale factor is carried in a new TCP option, Window
Scale. This option is sent only in a SYN segment (a segment with the
SYN bit on), hence the window scale is fixed in each direction when a
connection is opened.
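The scaled window is simply the 16-bit Window field shifted left by the negotiated scale factor, which RFC 1323 caps at 14 (a brief illustration):

```python
MAX_SCALE = 14  # RFC 1323 caps the shift count at 14

def effective_window(window_field, scale):
    """Receive window in bytes, given the 16-bit field and the scale option."""
    if not 0 <= window_field <= 0xFFFF:
        raise ValueError("window field is 16 bits")
    return window_field << min(scale, MAX_SCALE)

print(effective_window(0xFFFF, 0))    # 65535: the classic unscaled maximum
print(effective_window(0xFFFF, 14))   # 1073725440: roughly 1 GB, the scaled maximum
```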
TCP error recovery
In TCP, each source determines how much capacity is available in the
network so it knows how many packets it can safely have in transit.
Once a given source has this many packets in transit, it uses the
arrival of an ACK as a signal that some of its packets have left the
network and it is therefore safe to insert new packets into the network
without adding to the level of congestion. TCP uses congestion
control algorithms to determine the network capacity. From the
congestion control point of view, a TCP connection is in one of the
following states.
◆ Slow start: After a connection is established and after a loss is detected by a timeout or by duplicate ACKs.
◆ Fast recovery: After a loss is detected by fast retransmit.
◆ Congestion avoidance: In all other cases.
Congestion avoidance and slow start work hand-in-hand. The congestion avoidance algorithm assumes that the chance of a packet being lost due to damage is very small. Therefore, the loss of a packet means there is congestion somewhere in the network between the source and destination. The occurrence of a timeout or the receipt of duplicate ACKs indicates packet loss.
When congestion is detected in the network it is necessary to slow
things down, so the slow start algorithm is invoked. Two parameters,
the congestion window (cwnd) and a slow start threshold (ssthresh),
are maintained for each connection. When a connection is
established, both of these parameters are initialized. The cwnd is
initialized to one MSS. The ssthresh is used to determine whether the
slow start or congestion avoidance algorithm is to be used to control
data transmission. The initial value of ssthresh may be arbitrarily
high (usually ssthresh is initialized to 65535 bytes), but it may be
reduced in response to congestion.
The slow start algorithm is used when cwnd is less than ssthresh,
while the congestion avoidance algorithm is used when cwnd is
greater than ssthresh. When cwnd and ssthresh are equal, the sender
may use either slow start or congestion avoidance.
TCP never transmits more than the minimum of cwnd and the
receiver’s advertised window. When a connection is established, or if
congestion is detected in the network, TCP is in slow start and the
congestion window is initialized to one MSS. Each time an ACK is
received, the congestion window is increased by one MSS. The sender starts by transmitting one segment and waiting for its ACK. When
that ACK is received, the congestion window is incremented from
one to two, and two segments can be sent. When each of those two
segments is acknowledged, the congestion window is increased to
four, and so on. The window size increases exponentially during slow
start as shown in Figure 3. When a time-out occurs or a duplicate
ACK is received, ssthresh is reset to one half of the current window
(that is, the minimum of cwnd and the receiver's advertised
window). If the congestion was detected by an occurrence of a
timeout, the cwnd is set to one MSS.
When an ACK is received for data transmitted, the cwnd is increased.
However, the way it is increased depends on whether TCP is
performing slow start or congestion avoidance. If the cwnd is less
than or equal to the ssthresh, TCP is in slow start and slow start
continues until TCP is halfway to where it was when congestion
occurred, then congestion avoidance takes over. Congestion
avoidance increments the cwnd by MSS squared divided by cwnd (in
bytes) each time an ACK is received, increasing the cwnd linearly as
shown in Figure 3. This provides a close approximation to increasing
cwnd by, at most, one MSS per RTT.
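The two growth phases can be illustrated with a toy simulation (units are MSS, one step per RTT; this shows the shape of Figure 3, not real stack behavior):

```python
def simulate_cwnd(rtts, ssthresh, loss_at=None):
    """Yield cwnd (in MSS) once per RTT through slow start and congestion avoidance."""
    cwnd = 1.0
    for rtt in range(rtts):
        yield cwnd
        if loss_at is not None and rtt == loss_at:   # timeout detected this RTT
            ssthresh = max(cwnd / 2, 2)              # halve the threshold
            cwnd = 1.0                               # drop back into slow start
        elif cwnd < ssthresh:
            cwnd *= 2                                # slow start: exponential growth
        else:
            cwnd += 1                                # congestion avoidance: linear growth

print(list(simulate_cwnd(8, ssthresh=8)))
# [1.0, 2.0, 4.0, 8.0, 9.0, 10.0, 11.0, 12.0]
```

The first four RTTs double cwnd (slow start); once cwnd reaches ssthresh the growth switches to one MSS per RTT (congestion avoidance).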
Figure 3: Slow start and congestion avoidance (cwnd plotted against RTT: exponential growth during slow start, then linear growth during congestion avoidance once cwnd passes ssthresh)
A TCP receiver generates ACKs on receipt of data segments. The
ACK contains the highest contiguous sequence number the receiver
expects to receive next. This informs the sender of the in-order data
that was received by the receiver. When the receiver receives a
segment with a sequence number greater than the sequence number
it expected to receive, it detects the out-of-order segment and
generates an immediate ACK with the last sequence number it has
received in-order (that is, a duplicate ACK). This duplicate ACK is
not delayed. Since the sender does not know if this duplicate ACK is
a result of a lost packet or an out-of-order delivery, it waits for a small
number of duplicate ACKs, assuming that if the packets are only
reordered there will be only one or two duplicate ACKs before the
reordered segment is received and processed and a new ACK is
generated. If three or more duplicate ACKs are received in a row, it
implies there has been a packet loss. At that point, the TCP sender
retransmits this segment without waiting for the retransmission timer
to expire. This is known as fast retransmit (Figure 4).
After fast retransmit has sent the supposedly missing segment, the
congestion avoidance algorithm is invoked instead of the slow start;
this is called fast recovery. Receipt of a duplicate ACK implies that not
only is a packet lost, but that there is data still flowing between the
two ends of TCP, as the receiver will only generate a duplicate ACK
on receipt of another segment. Hence, fast recovery allows high
throughput under moderate congestion.
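The sender-side trigger can be sketched as a simple duplicate-ACK counter, using the three-duplicate threshold described above (a hypothetical helper, not taken from any particular stack):

```python
DUP_ACK_THRESHOLD = 3

def fast_retransmit_decisions(acks):
    """Given the stream of ACK numbers received, yield the sequence numbers
    that fast retransmit would resend (same ACK seen three extra times)."""
    last_ack, dupes = None, 0
    for ack in acks:
        if ack == last_ack:
            dupes += 1
            if dupes == DUP_ACK_THRESHOLD:
                yield ack            # retransmit the segment starting at this number
        else:
            last_ack, dupes = ack, 0

# ACKs for the exchange in Figure 4: ACKs for 21 and 22 arrive (expecting 23),
# then three duplicates of 23 as segments 24-26 land while 23 is missing.
print(list(fast_retransmit_decisions([22, 23, 23, 23, 23])))   # [23]
```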
Figure 4: Fast retransmit (the sender transmits segments 21 through 26; segment 23 is lost in the network; the receiver ACKs 21 and 22 and then, still expecting 23, sends a duplicate ACK for each of segments 24, 25, and 26; after three duplicate ACKs the sender retransmits 23 without waiting for the retransmission timer, and the receiver then ACKs through 26, expecting 27)
TCP network congestion
A network link is said to be congested if contention for it causes
queues to build up and packets start getting dropped. The TCP
protocol detects these dropped packets and starts retransmitting
them, but using aggressive retransmissions to compensate for packet
loss tends to keep systems in a state of network congestion even after
the initial load has been reduced to a level which would not normally
have induced network congestion. In this situation, demand for link bandwidth (and eventually queue space) outstrips what is available.
When congestion occurs, all the flows that detect it must reduce their
transmission rate. If they do not do so, the network will remain in an
unstable state with queues continuing to build up.
IPv6
Internet Protocol version 6 (IPv6) is a network layer protocol for
packet-switched internets. It is designated as the successor of IPv4.
Note: For the most up-to-date support information, always refer to the EMC
Support Matrix > PDF and Guides > Miscellaneous > Internet Protocol.
Note: The information in this section was acquired from Wikipedia.org,
August 2007, which provides further details on many of these topics.
The main improvement of IPv6 is the increase in the number of
addresses available for networked devices. IPv4 supports 2^32 (about 4.3 billion) addresses. In comparison, IPv6 supports 2^128 (about 3.4×10^38) addresses, or approximately 5×10^28 addresses for each of roughly 6.5 billion people. Supplying that many addresses per person, however, was not the designers' intention.
The extended address length simplifies operational considerations,
including dynamic address assignment and router decision-making.
It also avoids many complex workarounds that were necessary in
IPv4, such as Classless Inter-Domain Routing (CIDR). Its simplified
packet header format improves the efficiency of forwarding in
routers. More information on this topic is provided in “Larger
address space” on page 30 and “Addressing” on page 32.
This section contains the following information:
◆ "Features of IPv6" on page 29
◆ "Deployment status" on page 31
◆ "Addressing" on page 32
◆ "IPv6 packet" on page 37
◆ "Transition mechanisms" on page 38
Features of IPv6
To a great extent, IPv6 is a conservative extension of IPv4. Most
transport- and application-layer protocols need little or no change to
work over IPv6. The few exceptions are application protocols that embed network-layer addresses (such as FTP or NTPv3).
Applications, however, usually need small changes and a recompile
in order to run over IPv6.
The following features of IPv6 will be further discussed in this
section:
◆ "Larger address space" on page 30
◆ "Stateless autoconfiguration of hosts" on page 30
◆ "Multicast" on page 31
◆ "Jumbograms" on page 31
◆ "Network-layer security" on page 31
◆ "Mobility" on page 31

Larger address space
The main feature of IPv6 is the larger address space: 128 bits long
(versus 32 bits in IPv4). The larger address space avoids the potential
exhaustion of the IPv4 address space without the need for network
address translation (NAT) and other devices that break the
end-to-end nature of Internet traffic.
Note: In rare cases, NAT may still be necessary, but it will be difficult in IPv6
so should be avoided whenever possible.
It also makes administration of medium and large networks simpler,
by avoiding the need for complex subnetting schemes. Ideally,
subnetting will revert to its original purpose of logical segmentation
of an IP network for optimal routing and access.
There are a few drawbacks to larger addresses. For instance, in
regions where bandwidth is limited, IPv6 carries some bandwidth
overhead over IPv4. However, header compression can sometimes be
used to alleviate this problem. IPv6 addresses are also harder to
memorize than IPv4 addresses, which are, in turn, harder to
memorize than Domain Name System (DNS) names. DNS protocols
have been modified to support IPv6 as well as IPv4.
For more information, refer to “Addressing” on page 32.
Stateless autoconfiguration of hosts
IPv6 hosts can be automatically configured when connected to a
routed IPv6 network. When first connected to a network, a host sends a link-local multicast request for its configuration parameters. If configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.
If IPv6 autoconfiguration is not suitable, a host can use stateful
autoconfiguration (DHCPv6) or be configured manually.
Note: Stateless autoconfiguration is suitable only for hosts. Routers must be
configured manually or by other means.
Multicast
Network infrastructures, in most environments, are not configured to
route multicast. The link-scoped aspect of multicast (that is, on a
single subnet) will work but the site-scope, organization-scope, and
global-scope multicast will not be routed.
IPv6 does not have a link-local broadcast facility. The same effect can
be achieved by multicasting to the all-hosts group (FF02::1).
The m6bone caters for the deployment of a global IPv6 multicast network.
Jumbograms
IPv6 has optional support for packets over the IPv4 limit of 64 KB
when used between capable communication partners and on
communication links with a maximum transmission unit larger than
65,576 octets. These are referred to as jumbograms and can be as large
as 4 GB. The use of jumbograms may improve performance over
high-MTU (Maximum Transmission Unit) networks.
An optional feature of IPv6, the jumbo payload option, allows the
exchange of packets larger than this size between cooperating hosts.
Network-layer security
IP security (IPsec), the protocol for IP network-layer encryption and authentication, is an integral part of the base protocol suite in IPv6. In IPv4, this is optional (although usually implemented). IPsec is not widely deployed except for securing traffic between IPv6 Border Gateway Protocol (BGP) routers (the core routing protocol of the Internet).

Mobility
Mobile IPv6 (MIPv6) avoids triangular routing and is as efficient as normal IPv6. This advantage is mostly hypothetical, since neither MIP nor MIPv6 are widely deployed.
Deployment status
As of December 2005, IPv6 accounts for only a small percentage of
the live addresses in the Internet, which is still dominated by IPv4.
Many of the features of IPv6 have been ported to IPv4, with the
exception of stateless autoconfiguration, more flexible addressing,
and Secure Neighbor Discovery (SEND).
IPv6 deployment is primarily driven by IPv4 address space
exhaustion, which has been slowed by the introduction of classless
inter-domain routing (CIDR) and the extensive use of network
address translation (NAT).
Estimates as to when the pool of available IPv4 addresses will be
exhausted vary widely, ranging from around 2011 (2005 report by
Cisco Systems) to Paul Wilson’s (director of APNIC) prediction of
2023.
To prepare for the inevitable, a number of governments are starting to
require support for IPv6 in new equipment. The U.S. Government, for
example, has specified that the network backbones of all federal
agencies must deploy IPv6 by 2008 and bought 247 billion IPv6
addresses to begin the deployment. The People’s Republic of China
has a 5-year plan for deployment of IPv6, called the “China Next
Generation Internet.”
Addressing
The following subjects are briefly discussed in this section:
◆ "128-bit length" on page 32
◆ "Notation" on page 33
◆ "Literal IPv6 addresses in URLs" on page 33
◆ "Network notation" on page 34
◆ "Types of IPv6 addresses" on page 34
◆ "Special addresses" on page 35
◆ "Zone indices" on page 36

128-bit length
The primary change from IPv4 to IPv6, as discussed in “Larger
address space” on page 30, is the length of network addresses. IPv6 addresses are 128 bits long (as defined by RFC 4291), compared to IPv4 addresses, which are 32 bits. IPv6 has enough room for 3.4×10^38 unique addresses, while the IPv4 address space contains about 4 billion addresses.
IPv6 addresses are typically composed of two logical parts: a 64-bit
(sub-)network prefix and a 64-bit host part, which is either
automatically generated from the interface's Media Access Control
(MAC) address or assigned sequentially. Globally unique MAC
addresses offer an opportunity to track user equipment (and thus
users) across time and IPv6 address changes. In order to restore some
of the anonymity existing in the IPv4, RFC 3041 was developed to
reduce the prospect of user identity being permanently tied to an IPv6 address. RFC 3041 specifies a mechanism by which time-varying
random bit strings can be used as interface circuit identifiers,
replacing unchanging and traceable MAC addresses.
Notation
IPv6 addresses are normally written as eight groups of four
hexadecimal digits. For example, the following is a valid IPv6
address:
2001:0db8:85a3:08d3:1319:8a2e:0370:7334
If one or more four-digit groups are 0000, the zeros may be omitted and replaced with two colons (::). For example,
2001:0db8:0000:0000:0000:0000:1428:57ab can be shortened to
2001:0db8::1428:57ab. Following this rule, any number of consecutive
0000 groups may be reduced to two colons, as long as there is only
one double colon used in an address. Leading zeros in a group can
also be omitted (as in ::1 for localhost). For example, the following
addresses are all valid and equivalent:
2001:0db8:0000:0000:0000:0000:1428:57ab
2001:0db8:0000:0000:0000::1428:57ab
2001:0db8:0:0:0:0:1428:57ab
2001:0db8:0:0::1428:57ab
2001:0db8::1428:57ab
2001:db8::1428:57ab
Note: Having more than one double-colon abbreviation in an address is
invalid, as it would make the notation ambiguous.
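Python's standard ipaddress module implements exactly these compression rules, which makes it a convenient way to verify that different notations name the same address (an aside for illustration, not part of the TechBook's procedures):

```python
import ipaddress

# Four of the equivalent spellings listed above.
forms = [
    '2001:0db8:0000:0000:0000:0000:1428:57ab',
    '2001:0db8:0:0:0:0:1428:57ab',
    '2001:0db8::1428:57ab',
    '2001:db8::1428:57ab',
]
addrs = {ipaddress.ip_address(f) for f in forms}
print(len(addrs))                                   # 1: all four are the same address
print(ipaddress.ip_address(forms[0]).compressed)    # 2001:db8::1428:57ab
```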
A sequence of 4 bytes at the end of an IPv6 address can also be
written in decimal, using dots as separators. This notation is often
used with compatibility addresses. For example, the following addresses are all the same:
::ffff:1.2.3.4
::ffff:0102:0304
0:0:0:0:0:ffff:0102:0304
Additional information can be found in RFC 4291 — IP Version 6
Addressing Architecture.
Literal IPv6 addresses in URLs
In a URL, the IPv6 address is enclosed in square brackets. For example:
http://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]/
IPv6
33
TCP/IP Technology
This notation allows parsing a URL without confusing the IPv6
address and port number:
https://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]:443/
Additional information can be found in RFC 2732 — Format for
Literal IPv6 Addresses in URLs and RFC 3986 — Uniform Resource
Identifier (URI): Generic Syntax.
Network notation
IPv6 networks are written using Classless Inter-Domain Routing
(CIDR) notation.
An IPv6 network (or subnet) is a contiguous group of IPv6 addresses,
the size of which must be a power of two. The initial bits of addresses,
identical for all hosts in the network, are called the network's prefix.
A network is denoted by the first address in the network and the size
in bits of the prefix (in decimal), separated with a slash. For example:
2001:0db8:1234::/48
stands for the network with addresses:
2001:0db8:1234:0000:0000:0000:0000:0000 through
2001:0db8:1234:FFFF:FFFF:FFFF:FFFF:FFFF
Because a single host can be seen as a network with a 128-bit prefix, host addresses are sometimes written with a trailing /128.
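Python's ipaddress module understands this CIDR notation, so the /48 example above can be checked directly (illustrative only):

```python
import ipaddress

net = ipaddress.ip_network('2001:0db8:1234::/48')
print(net.network_address)   # 2001:db8:1234:: (first address in the network)
print(net[-1])               # 2001:db8:1234:ffff:ffff:ffff:ffff:ffff (last address)
print(ipaddress.ip_address('2001:db8:1234:beef::1') in net)   # True
print(ipaddress.ip_address('2001:db8:5678::1') in net)        # False
```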
Types of IPv6 addresses
IPv6 addresses are divided into the following three categories:
◆ Unicast addresses — Identify a single network interface. A packet sent to a unicast address is delivered to that specific computer.
◆ Multicast addresses — Used to define a set of interfaces that typically belong to different nodes instead of just one. When a packet is sent to a multicast address, the protocol delivers the packet to all interfaces identified by that address. Multicast addresses begin with the prefix FF00::/8. Their second octet identifies the address's scope, that is, the range over which the multicast address is propagated. Commonly used scopes include link-local (2), site-local (5), and global (E).
◆ Anycast addresses — Also assigned to more than one interface, belonging to different nodes. However, a packet sent to an anycast address is delivered to just one of the member interfaces, typically the "nearest" according to the routing protocol's idea of distance. Anycast addresses cannot be easily identified. They have the structure of normal unicast addresses, and differ only by being injected into the routing protocol at multiple points in the network.
Special addresses
There are a number of addresses with special meaning in IPv6:
◆ ::/128 — The address with all zeros is an unspecified address, and is to be used only in software.
◆ ::1/128 — The loopback address is a localhost address. If an application in a host sends packets to this address, the IPv6 stack will loop these packets back to the same host (corresponding to 127.0.0.1 in IPv4).
◆ ::/96 — The zero prefix was used for IPv4-compatible addresses. It is now obsolete.
◆ ::ffff:0:0/96 — This prefix is used for IPv4-mapped addresses (see "Transition mechanisms" on page 38).
◆ 2001:db8::/32 — This prefix is used in documentation (RFC 3849). Addresses from this prefix should be used anywhere an example IPv6 address is given.
◆ 2002::/16 — This prefix is used for 6to4 addressing.
◆ fc00::/7 — Unique Local Addresses (ULA) are routable only within a set of cooperating sites. They were defined in RFC 4193 as a replacement for site-local addresses. The addresses include a 40-bit pseudorandom number that minimizes the risk of conflicts if sites merge or packets somehow leak out. This address space is split into two parts:
  • fc00::/8 — ULA Central, currently not used as the draft is expired.
  • fd00::/8 — ULA, as per RFC 4193, with a generator and unofficial registry.
◆ fe80::/64 — The link-local prefix specifies that the address is valid only on the local physical link. This is analogous to the autoconfiguration address range 169.254.0.0/16 in IPv4.
◆ fec0::/10 — The site-local prefix specifies that the address is valid only inside the local organization.
Note: Its use was deprecated in September 2004 by RFC 3879 and systems must not support this special type of address.
◆ ff00::/8 — The multicast prefix is used for multicast addresses, as defined in "IP Version 6 Addressing Architecture" (RFC 4291).
There are no address ranges reserved for broadcast in IPv6. Instead,
applications use multicast to the all-hosts group. IANA maintains the
official list of the IPv6 address space. Global unicast assignments can
be found at the various RIRs or at the Ghost Route Hunter (GRH)
DFP pages.
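Several of these special ranges can be recognized programmatically; Python's ipaddress module exposes them as attributes (the attribute names below are the module's own, shown here only as an illustration):

```python
import ipaddress

# One address from each of several special ranges listed above.
samples = [
    '::1',                        # loopback
    'fe80::1122:33ff:fe11:2233',  # link-local (fe80::/10)
    'fd00::1',                    # unique local (fd00::/8)
    '::ffff:192.0.2.1',           # IPv4-mapped (::ffff:0:0/96)
    'ff02::1',                    # multicast: the all-hosts group
]
for text in samples:
    a = ipaddress.ip_address(text)
    print(text, a.is_loopback, a.is_link_local, a.is_multicast, a.ipv4_mapped)
```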
Zone indices
Link-local addresses present a particular problem for systems with
multiple interfaces. Because each interface may be connected to
different networks and the addresses all appear to be on the same
subnet, an ambiguity arises that cannot be solved by routing tables.
For example, host A has two interfaces that automatically receive
link-local addresses when activated (per RFC 2462), fe80::1/64 and
fe80::2/64, only one of which is connected to the same physical
network as host B, which has address fe80::3/64. If host A attempts to
contact fe80::3, how does it know which interface (fe80::1 or fe80::2) to
use?
The solution, defined by RFC 4007, is the addition of a unique zone
index for the local interface, represented textually in the form
<address>%<zone_id>. For example:
http://[fe80::1122:33ff:fe11:2233%eth0]:80/
However, the % character clashes with the percent-encoding used in
URIs, and zone ID syntax varies by platform:
◆
Microsoft Windows IPv6 stack uses numeric zone IDs: fe80::3%1
◆
BSD applications typically use the interface name as a zone ID:
fe80::3%pcn0
◆
Linux applications also typically use the interface name as a zone
ID: fe80::3%eth0, although Linux ifconfig as of version 1.42 (part
of net-tools 1.60) does not display zone IDs.
Relatively few IPv6-capable applications understand zone ID syntax
(with the notable exception of OpenSSH), rendering link-local
addresses unusable within them if multiple interfaces use link-local
addresses.
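The zone-index syntax above can be exercised with Python's standard library (3.9 or later). The interface name eth0 and the index value 1 are illustrative assumptions, not values that exist on every system.

```python
import ipaddress

# Python 3.9+ parses the %zone suffix on link-local addresses.
addr = ipaddress.IPv6Address("fe80::3%eth0")
print(addr.scope_id)   # eth0

# For AF_INET6 sockets the zone becomes the scope_id field of the
# (host, port, flowinfo, scope_id) 4-tuple; the name is first mapped
# to an index with socket.if_nametoindex("eth0") on a real system.
sockaddr = ("fe80::3", 3260, 0, 1)   # 1 = hypothetical interface index
```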
IPv6 packet
A packet is a formatted block of data carried by a computer network.
Figure 5 shows the structure of an IPv6 packet header.
Figure 5  IPv6 packet header structure
The IPv6 packet is composed of two main parts:
◆
Header
The header is in the first 40 octets (320 bits) of the packet and
contains:
• Both source and destination addresses (128 bits each)
• Version (4-bit IP version)
• Traffic class (8 bits, Packet Priority)
• Flow label (20 bits, QoS management)
• Payload length in bytes (16 bits)
• Next header (8 bits)
• Hop limit (8 bits, time to live)
◆
Payload
The payload can be up to 64 KB in size in standard mode, or
larger with a jumbo payload option (refer to “Jumbograms” on
page 31).
Fragmentation is handled only in the sending host in IPv6. Routers
never fragment a packet, and hosts are expected to use Path MTU
(PMTU) discovery.
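The fixed header layout above can be sketched with Python's struct module. The field values used here (TCP as the next header, a 1280-byte payload, all-zero addresses) are arbitrary examples.

```python
import struct

def build_ipv6_header(src, dst, payload_len, next_header,
                      hop_limit=64, traffic_class=0, flow_label=0):
    """Pack the fixed 40-octet IPv6 header described above."""
    # First 32 bits: version (4) | traffic class (8) | flow label (20)
    vtf = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB", vtf, payload_len, next_header, hop_limit) + src + dst

hdr = build_ipv6_header(src=bytes(16), dst=bytes(16),
                        payload_len=1280, next_header=6)   # 6 = TCP
assert len(hdr) == 40   # the header is always exactly 40 octets
```

Note that, unlike IPv4, no header checksum field exists to compute.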
The protocol field of IPv4 is replaced with a Next Header field. This
field usually specifies the transport layer protocol used by a packet's
payload. In the presence of options, however, the Next Header field
specifies the presence of an Extra Options header, which then follows
the IPv6 header. The payload's protocol itself is specified in a field of
the Options header. This insertion of an extra header to carry options
is analogous to the handling of AH and Encapsulating Security
Payload (ESP) in IPsec for both IPv4 and IPv6.
Transition mechanisms
Until IPv6 completely supplants IPv4, which is not likely to happen
in the near future, a number of so-called transition mechanisms are
needed to enable IPv6-only hosts to reach IPv4 services and to allow
isolated IPv6 hosts and networks to reach the IPv6 Internet over the
IPv4 infrastructure. The following transition mechanisms are briefly
discussed in this section.
◆ "Dual stack" on page 38
◆ "Tunneling" on page 38
◆ "Automatic tunneling" on page 39
◆ "Configured tunneling" on page 39
◆ "Proxying and translation" on page 39
Dual stack
Since IPv6 is a conservative extension of IPv4, it is relatively easy to
write a network stack that supports both IPv4 and IPv6 while sharing
most of the code. Such an implementation is called a dual stack. A host
implementing a dual stack is called a dual-stack host. This approach is
described in RFC 4213.
Most current implementations of IPv6 use a dual stack. Some early
experimental implementations used independent IPv4 and IPv6
stacks. There are no known implementations that implement IPv6
only.
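A dual-stack host typically reports IPv4 peers on an IPv6 socket using IPv4-mapped addresses from the ::ffff:0:0/96 prefix listed earlier. A minimal illustration with the standard library (192.0.2.1 is a documentation address):

```python
import ipaddress

# An AF_INET6 socket on a dual-stack host commonly reports IPv4 peers
# as IPv4-mapped addresses from the ::ffff:0:0/96 prefix.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)   # 192.0.2.1
```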
Tunneling
In order to reach the IPv6 Internet, an isolated host or network must
be able to use the existing IPv4 infrastructure to carry IPv6 packets.
This is done using a technique somewhat misleadingly known as
tunneling, which consists of encapsulating IPv6 packets within IPv4, in
effect using IPv4 as a link layer for IPv6.
IPv6 packets can be directly encapsulated within IPv4 packets using
protocol number 41. They can also be encapsulated within UDP
packets, for example, in order to cross a router or NAT device that
blocks protocol 41 traffic. They can also use generic encapsulation
schemes, such as Anything In Anything (AYIYA) or Generic Routing
Encapsulation (GRE).
Automatic tunneling
Automatic tunneling refers to a technique where the tunnel
endpoints are automatically determined by the routing
infrastructure. The recommended technique for automatic tunneling
is 6to4 tunneling, which uses protocol 41 encapsulation. Tunnel
endpoints are determined by using a well-known IPv4 anycast
address on the remote side, and embedding IPv4 address information
within IPv6 addresses on the local side. 6to4 tunneling is widely
deployed today.
Another automatic tunneling mechanism is Intra-Site Automatic
Tunnel Addressing Protocol (ISATAP). This protocol treats the IPv4
network as a virtual IPv6 local link, with mappings from each IPv4
address to a link-local IPv6 address.
Teredo is an automatic tunneling technique that uses UDP
encapsulation and is claimed to be able to cross multiple NAT boxes.
Teredo is not widely deployed today, but an experimental version of
Teredo is installed with the Windows XP SP2 IPv6 stack.
Note: IPv6, 6to4, and Teredo are enabled by default in Windows Vista.
Configured tunneling
Configured tunneling is a technique where the tunnel endpoints are
configured explicitly, either by a human operator or by an automatic
service known as a Tunnel Broker. Configured tunneling is usually
more deterministic and easier to debug than automatic tunneling,
and is therefore recommended for large, well-administered networks.
Configured tunneling typically uses either protocol 41
(recommended) or raw UDP encapsulation.
Proxying and translation
When an IPv6-only host needs to access an IPv4-only service (for
example, a web server), some form of translation is necessary. The
one form of translation that actually works is the use of a dual-stack
application-layer proxy (for example, a web proxy).
Techniques for application-agnostic translation at the lower layers
have also been proposed, but they have been found to be too
unreliable due to the wide range of functionality required by
common application-layer protocols. As such, they are commonly
considered to be obsolete.
Internet Protocol security (IPsec)
Internet Protocol security (IPsec) is a set of protocols developed by
the IETF to support secure exchange of packets in the IP layer. IP
Security has been deployed widely to implement Virtual Private
Networks (VPNs).
IP security supports two encryption modes:
◆
Transport
◆
Tunnel
Transport mode encrypts only the payload of each packet, but leaves
the header untouched. The more secure Tunnel mode encrypts both
the header and the payload.
On the receiving side, an IP Security compliant device decrypts each
packet. For IP security to work, the sending and receiving devices
must share a public key. This is accomplished through a protocol
known as Internet Security Association and Key Management
Protocol/Oakley (ISAKMP/Oakley), which allows the receiver to
obtain a public key and authenticate the sender using digital
certificates.
Tunneling and IPsec
Internet Protocol security (IPsec) uses cryptographic security to
ensure private, secure communications over Internet Protocol
networks. IPsec supports network-level data integrity, data
confidentiality, data origin authentication, and replay protection. It
helps secure your SAN against network-based attacks from untrusted
computers, attacks that can result in the denial-of-service of
applications, services, or the network, data corruption, and data and
user credential theft.
By default, when creating an FCIP tunnel, IPsec is disabled.
FCIP tunneling with IPsec enabled will support maximum
throughput as follows:
◆
Unidirectional: approximately 104 MB/sec
◆
Bidirectional: approximately 90 MB/sec
Used to provide greater security in tunneling on an FR4-18i blade or a
Brocade SilkWorm 7500 switch, the IPsec feature does not require you
to configure separate security for each application that uses TCP/IP.
When configuring for IPsec, however, you must ensure that there is
an FR4-18i blade or a Brocade SilkWorm 7500 switch in each end of
the FCIP tunnel. IPsec works on FCIP tunnels with or without IP
compression (IPComp).
IPsec requires an IPsec license in addition to the FCIP license.
IPsec terminology
AES
Advanced Encryption Standard. FIPS 197 endorses the Rijndael encryption algorithm as the approved AES for use by US government organizations and others to protect sensitive information. It replaces DES as the encryption standard.
AES-XCBC
Cipher Block Chaining. A key-dependent one-way hash function (MAC) used with AES in conjunction with the Cipher-Block-Chaining mode of operation, suitable for securing messages of varying lengths, such as IP datagrams.
AH
Authentication Header. Like ESP, AH provides data integrity, data
source authentication, and protection against replay attacks but does
not provide confidentiality.
DES
Data Encryption Standard is the older encryption algorithm that uses
a 56-bit key to encrypt blocks of 64-bit plain text. Because of its
relatively short key length, it is not considered secure and is no
longer approved for Federal use.
3DES
Triple DES is a more secure variant of DES. It uses three different
56-bit keys to encrypt blocks of 64-bit plain text. The algorithm is
FIPS-approved for use by Federal agencies.
ESP
Encapsulating Security Payload is the IPsec protocol that provides
confidentiality, data integrity, and data source authentication of IP
packets, as well as protection against replay attacks.
MD5
Message Digest 5, like SHA-1, is a popular one-way hash function
used for authentication and data integrity.
SHA
Secure Hash Algorithm, like MD5, is a popular one-way hash
function used for authentication and data integrity.
MAC
Message Authentication Code is a key-dependent, one-way hash function used for generating and verifying authentication data.
HMAC
A stronger MAC because it is a keyed hash inside a keyed hash.
SA
Security association is the collection of security parameters and authenticated keys that are negotiated between IPsec peers.
2
iSCSI Technology
This chapter provides a brief overview of iSCSI technology.
◆ "iSCSI technology overview" on page 44
◆ "iSCSI discovery" on page 46
◆ "iSCSI error recovery" on page 47
◆ "iSCSI security" on page 48
iSCSI technology overview
Internet Small Computer System Interface (iSCSI) is an IP-based
storage networking standard for linking data storage facilities,
developed by the Internet Engineering Task Force (IETF). By
transmitting SCSI commands over IP networks, iSCSI can facilitate
block-level transfers over intranets and the Internet.
The iSCSI architecture is similar to a client/server architecture. In this
case, the client is an initiator that issues an I/O request and the server
is a target (such as a device in a storage system). This architecture can
be used over IP networks to provide distance extension. This can be
implemented between routers, host-to-switch, and storage
array-to-storage array to provide asynchronous/synchronous data
transfer.
Figure 6 shows an example of where iSCSI sits in the network.
Figure 6  iSCSI example
Figure 7 shows an example of an iSCSI header.
Figure 7  iSCSI header example
Figure 8 defines the fields, size, and functions of the iSCSI header.
Figure 8  iSCSI header fields, size, and functions
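Since Figures 7 and 8 are not reproduced in this extract, the following is a hedged sketch of the 48-byte iSCSI Basic Header Segment defined by RFC 3720. The opcode, data length, and task tag values are arbitrary examples.

```python
def build_bhs(opcode, data_segment_len, itt, lun=0, immediate=False):
    """Pack a 48-byte iSCSI Basic Header Segment (RFC 3720 layout):
    opcode, TotalAHSLength, DataSegmentLength, LUN, Initiator Task Tag."""
    bhs = bytearray(48)
    bhs[0] = (0x40 if immediate else 0) | (opcode & 0x3F)
    bhs[4] = 0                                      # TotalAHSLength (no AHS)
    bhs[5:8] = data_segment_len.to_bytes(3, "big")  # DataSegmentLength
    bhs[8:16] = lun.to_bytes(8, "big")
    bhs[16:20] = itt.to_bytes(4, "big")             # Initiator Task Tag
    return bytes(bhs)

pdu = build_bhs(opcode=0x01, data_segment_len=512, itt=0xCAFEF00D)  # 0x01 = SCSI Command
assert len(pdu) == 48
```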
iSCSI discovery
In order for an iSCSI initiator to establish an iSCSI session with an
iSCSI target, the initiator needs the IP address, TCP port number, and
iSCSI target name information. The goals of iSCSI discovery
mechanisms are to provide low overhead support for small iSCSI
setups and scalable discovery solutions for large enterprise setups.
The following methods are briefly discussed in this section:
◆
“Static” on page 46
◆
“Send target” on page 46
◆
“iSNS” on page 46
Static
With static discovery, the target IP address, TCP port, and iSCSI name are already known to the initiator.
Send target
An initiator may log in to an iSCSI target with a session type of
discovery and request a list of target WWUIs through a separate
SendTargets command. All iSCSI targets are required to support the
SendTargets command.
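A SendTargets response is a NUL-separated list of TargetName= and TargetAddress= text pairs. A minimal parser sketch follows; the IQN and portal address are made up for the example.

```python
def parse_send_targets(data):
    """Parse the text response of a SendTargets discovery command:
    NUL-separated TargetName= / TargetAddress= key-value pairs."""
    targets = {}
    current = None
    for pair in data.split(b"\x00"):
        if not pair:
            continue
        key, _, value = pair.decode("ascii").partition("=")
        if key == "TargetName":
            current = value
            targets[current] = []
        elif key == "TargetAddress" and current is not None:
            targets[current].append(value)
    return targets

resp = (b"TargetName=iqn.1992-04.com.example:tgt1\x00"   # hypothetical IQN
        b"TargetAddress=192.0.2.10:3260,1\x00")
print(parse_send_targets(resp))   # {'iqn.1992-04.com.example:tgt1': ['192.0.2.10:3260,1']}
```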
iSNS
The iSNS protocol is designed to facilitate the automated discovery,
management, and configuration of iSCSI and Fibre Channel devices
on a TCP/IP network. iSNS provides intelligent storage discovery
and management services comparable to those found in Fibre
Channel networks, allowing a commodity IP network to function in a
similar capacity as a storage area network. iSNS also facilitates a
seamless integration of IP and Fibre Channel networks, due to its
ability to emulate Fibre Channel fabric services, and manage both
iSCSI and Fibre Channel devices. iSNS thereby provides value in any
storage network comprised of iSCSI devices, Fibre Channel devices,
or any other combination.
iSCSI error recovery
iSCSI supports three levels of error recovery: 0, 1, and 2:
◆
Error recovery level 0 implies session level recovery.
◆
Error recovery level 1 implies level 0 capabilities as well as digest
failure recovery.
◆
Error recovery level 2 implies level 1 capabilities as well as
connection recovery.
The most basic kind of recovery is called session recovery. In session
recovery, whenever any kind of error is detected, the entire iSCSI
session is terminated. All TCP connections connecting the initiator to
the target are closed, and all pending SCSI commands are completed
with an appropriate error status. A new iSCSI session is then
established between the initiator and target, with new TCP
connections.
Digest failure recovery starts if the iSCSI driver detects that data
arrived with an invalid data digest and that data packet must be
rejected. The command corresponding to the corrupted data can then
be completed with an appropriate error indication.
Connection recovery can be used when a TCP connection is broken.
Upon detection of a broken TCP connection, the iSCSI driver can
either immediately complete the pending command with an
appropriate error indication, or can attempt to transfer the SCSI
command to another TCP connection. If necessary, the iSCSI initiator
driver can establish another TCP connection to the target and inform
the target of the change in allegiance of the SCSI command to another
TCP connection.
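ErrorRecoveryLevel is negotiated as a minimum-style numeric key, so the operational level of a session is the lower of what the two sides support. A trivial sketch:

```python
def negotiate_erl(initiator_value, target_value):
    """ErrorRecoveryLevel is a numeric key negotiated to the minimum of
    the values the two sides offer (RFC 3720)."""
    for v in (initiator_value, target_value):
        if v not in (0, 1, 2):
            raise ValueError("ErrorRecoveryLevel must be 0, 1, or 2")
    return min(initiator_value, target_value)

assert negotiate_erl(2, 0) == 0   # the target only supports session recovery
```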
iSCSI security
Historically, native storage systems have not had to consider security
because their environments offered minimal security risks. These
environments consisted of storage devices either directly attached to
hosts or connected through a Storage Area Network (SAN) distinctly
separate from the communications network.
The use of storage protocols, such as SCSI over IP-networks, requires
that security concerns be addressed. iSCSI implementations must
provide means of protection against active attacks (such as
pretending to be another identity, message insertion, deletion,
modification, and replaying) and passive attacks (such as
eavesdropping, or gaining advantage by analyzing the data sent over
the line).
Although technically possible, iSCSI should not be configured
without security. iSCSI configured without security should be
confined to closed environments that present no security risk.
This section provides basic information on:
◆
“Security mechanisms” on page 48
◆
“Authentication methods” on page 49
Security mechanisms
The entities involved in iSCSI security are the initiator, target, and IP
communication end points. iSCSI scenarios in which multiple
initiators or targets share a single communication end point are
expected. To accommodate such scenarios, iSCSI uses two separate
security mechanisms:
◆
In-band authentication between the initiator and the target at the
iSCSI connection level (carried out by exchange of iSCSI Login
PDUs).
◆
Packet protection (integrity, authentication, and confidentiality)
by IPsec at the IP level.
The two security mechanisms complement each other. The in-band
authentication provides end-to-end trust (at login time) between the
iSCSI initiator and the target while IPsec provides a secure channel
between the IP communication end points.
Authentication methods
The authentication methods that can be used are:
CHAP (Challenge Handshake Authentication Protocol)
The Challenge-Handshake Authentication Protocol (CHAP) is used
to periodically verify the identity of the peer using a three-way
handshake. This is done upon establishing initial link and may be
repeated anytime after the link has been established. CHAP provides
protection against playback attack by the peer through the use of an
incrementally changing identifier and a variable challenge value. The
use of repeated challenges is intended to limit the time of exposure to
any single attack. The authenticator is in control of the frequency and
timing of the challenges. This authentication method depends upon a
"secret" known only to the authenticator and that peer. The secret is
not sent over the link.
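The CHAP response itself is a one-way hash over the identifier, the shared secret, and the challenge (RFC 1994). A sketch with a hypothetical secret and a fixed example challenge:

```python
import hashlib

def chap_response(identifier, secret, challenge):
    """CHAP response per RFC 1994: MD5 over the one-octet identifier,
    the shared secret, and the challenge value."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target issues a challenge; the initiator answers; the target
# recomputes the hash with its copy of the secret and compares.
challenge = bytes.fromhex("0f1e2d3c4b5a69788796a5b4c3d2e1f0")   # example challenge
answer = chap_response(0x01, b"shared-secret", challenge)       # hypothetical secret
assert answer == chap_response(0x01, b"shared-secret", challenge)
```

Because only the hash crosses the wire, the secret itself is never exposed on the link.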
SRP (Secure Remote Password)
This mechanism is suitable for negotiating secure connections using a
user-supplied password, while eliminating the security problems
traditionally associated with reusable passwords. This system also
performs a secure key exchange in the process of authentication,
allowing security layers (privacy and/or integrity protection) to be
enabled during the session. Trusted key servers and certificate
infrastructures are not required, and clients are not required to store
or manage any long-term keys.
KRB5 (Kerberos V5)
Kerberos provides a means of verifying the identities of principals
(such as a workstation user or a network server) on an open
(unprotected) network. This is accomplished without relying on
authentication by the host operating system, or basing trust on host
addresses, or requiring physical security of all the hosts on the
network, and under the assumption that packets traveling along the
network can be read, modified, and inserted at will. Kerberos
performs authentication under these conditions as a trusted
third-party authentication service by using conventional
cryptography such as a shared secret key.
SPKM1 & 2 (Simple Public Key GSS-API Mechanism)
This mechanism provides authentication, key establishment, data
integrity, and data confidentiality in an on-line distributed
application environment using a public-key infrastructure. SPKM can
be used as a drop-in replacement by any application which makes
use of security services through GSS-API calls (for example, any
application which already uses the Kerberos GSS-API for security).
Digests
Digests enable the checking of end-to-end, non-cryptographic data
integrity beyond the integrity checks provided by the link layers, and
they cover the entire communication path, including all elements that
may change the network-level PDUs, such as routers, switches, and
proxies.
Optional header and data digests protect the integrity of the header
and data, respectively. The digests, if present, are located after the
header and PDU-specific data and cover the header and the PDU
data, each including the padding bytes, if any. The existence and type
of digests are negotiated during the Login phase. The separation of
the header and data digests is useful in iSCSI routing applications,
where only the header changes when a message is forwarded. In this
case, only the header digest should be recalculated.
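The digest algorithm iSCSI commonly negotiates is CRC-32C (Castagnoli). A bitwise reference implementation, validated against the standard check value, can be sketched as:

```python
def crc32c(data):
    """Bitwise CRC-32C (Castagnoli), the algorithm iSCSI can negotiate
    for its optional header and data digests."""
    poly = 0x82F63B78            # reflected form of 0x1EDC6F41
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283   # standard CRC-32C check value
```

Production initiators and targets use table-driven or hardware-assisted versions of the same polynomial; the bitwise loop above is for clarity only.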
IPSec
IPSec is used for encryption and IP-level protection. It uses
◆
Authentication Header (AH)
◆
Encapsulating Security Payload (ESP)
◆
Internet Key Exchange (IKE)
IPSec is supported on the 1 Gb/s ports for iSCSI.
For more information on IPSec, refer to “Internet Protocol security
(IPsec)” on page 40.
3
iSCSI Solutions
This chapter provides the following information on iSCSI solutions.
◆ "Network design best practices" on page 52
◆ "EMC native iSCSI targets" on page 53
◆ "Configuring iSCSI targets" on page 58
◆ "Bridged solutions" on page 60
◆ "Summary" on page 69
Network design best practices
Consider the following best practices when designing the network:
◆
The network should be dedicated solely to the IP technology
being used, and no other traffic should be carried over it.
◆
The network must be a well-engineered network with no packet
loss or packet duplication. Either condition leads to retransmission,
which is undesirable.
◆
While planning the network, care must be taken to ascertain that
the utilized throughput will never exceed the available
bandwidth. Oversubscribing available bandwidth will lead to
network congestion, which causes dropped packets and leads to
TCP slow start. Network congestion must be considered between
switches as well as between the switch and the end device.
◆
The MTU must be configured based on the maximum available
MTU supported by each component on the network.
◆
Make sure that all hosts can access the network via multiple paths
using different subnets. This will ensure maximum availability.
EMC native iSCSI targets
This section discusses the following EMC® native iSCSI targets:
◆
“Symmetrix” on page 53
◆
“VNX for Block and CLARiiON” on page 54
◆
“Celerra Network Server” on page 55
◆
“VNX series for File” on page 56
Symmetrix
This section describes the EMC Symmetrix® VMAX™, DMX-4, and
DMX-3.
VMAX, DMX-4, DMX-3
The iSCSI channel director supports iSCSI channel connectivity to IP
networks and to iSCSI-capable open systems servers for block
storage transfer between hosts and storage. The primary applications
are storage consolidation and host extension for stranded servers and
departmental workgroups.
◆
The Symmetrix DMX iSCSI provides 1 Gb/s Ethernet ports and
connects through LC connectors.
◆
The Symmetrix VMAX iSCSI provides 1 Gb/s Ethernet ports and
also connects through LC connectors. With EMC Enginuity™ 5875
code, both 1 Gb/s and 10 Gb/s are supported.
The iSCSI directors support the iSNS protocol. CHAP (Challenge
Handshake Authentication Protocol) is the supported authentication
mechanism. LUNs are configured in the same manner as for Fibre
Channel directors and are assigned to the iSCSI ports. LUN masking
is available. Both the 10 Gb/s (VMAX) and 1 Gb/s (DMX/VMAX)
ports support IPv4 and IPv6.
References
For configuration of Symmetrix iSCSI target please check the
Symmetrix configuration guide.
For up-to-date iSCSI host support please refer to EMC Support Matrix,
available through E-Lab Interoperability Navigator at:
http://elabnavigator.EMC.com.
For configuration of iSCSI server, please check the respective host
connectivity guide.
VNX for Block and CLARiiON
EMC VNX™ for Block and CLARiiON® native iSCSI targets include:
VNX 5300/5500/5700/7500
This can be configured as a combination of a 10/1 Gb iSCSI and 8 Gb
Fibre Channel array. iSNS protocol is supported. Authentication
mechanism is Challenge Handshake Authentication Protocol
(CHAP). LUNs are configured in the same manner as for Fibre
Channel arrays and are assigned to a storage group.
CX4 120/240/480/960
This can be configured as a combination of a 10/1 Gb iSCSI and 8 Gb
Fibre Channel array. iSNS protocol is supported. Authentication
mechanism is CHAP. LUNs are configured in the same manner as for
Fibre Channel arrays and are assigned to a storage group.
CX3-20/CX3-40
This can be configured as an iSCSI array or Fibre Channel array. All
iSCSI ports on the array are 1 Gb/s Ethernet ports. iSNS protocol is
supported. Authentication mechanism is CHAP.
LUNs are configured in the same manner as for Fibre Channel array
and are assigned to a storage group.
CX300i/500i
These are dedicated iSCSI arrays. All iSCSI ports on the array are 1
Gb/s Ethernet ports. iSNS protocol is supported. Authentication
mechanism is CHAP.
LUNs are configured in the same manner as for Fibre Channel array
and are assigned to a storage group.
AX150/100i
These are dedicated iSCSI arrays. All iSCSI ports on the array are 1
Gb/s Ethernet ports. iSNS protocol is supported. Authentication
mechanism is CHAP.
LUNs are configured in the same manner as for Fibre Channel array
and are assigned to a storage group.
References
For configuration of CLARiiON iSCSI target please check the
CLARiiON configuration guide.
For up-to-date iSCSI host support please refer to EMC Support Matrix,
available through E-Lab Interoperability Navigator at:
http://elabnavigator.EMC.com.
For configuration of iSCSI server, please check the respective host
connectivity guide.
Celerra Network Server
Note: This configuration is available on pre-VNX series systems.
The EMC Celerra® Network Server provides iSCSI target capabilities
combined with NAS capabilities, as shown in Figure 9 on page 55.
The Celerra iSCSI system is defined by creating a file system. The file
system is built on Fibre Channel LUNs accessible on EMC
Symmetrix or CLARiiON arrays. The file system is then mounted on
the Celerra server Data Movers. iSCSI LUNs are then defined out of
the file system and allocated to iSCSI targets. The targets are then
associated with one of the Celerra TCP/IP interfaces.
Figure 9  Celerra iSCSI configurations (iSCSI initiators connect over a TCP/IP network or direct connect to Celerra NAS, which accesses CLARiiON or Symmetrix FC targets through a Fibre Channel fabric)
All Celerra Network Servers can be configured to provide iSCSI
services. The following are some of the characteristics of the Celerra
Network Server:
◆
iSCSI error recovery level 0 (session-level recovery).
◆
Supports CHAP with unlimited entries for one-way
authentication and one entry for reverse authentication.
◆
Uses iSNS protocol for discovery.
◆
Provides 10 Gb/s and 1 Gb/s interfaces
◆
Supports EMC storage Symmetrix and CLARiiON on the back
end.
Implementation best practices
The following information is provided to help you estimate size
requirements for iSCSI LUNs and provides guidelines for configuring
iSCSI on the Celerra Network Server.
Estimate size requirements for the file system.
◆
When using regular iSCSI LUNs, the file system should be large
enough to hold the LUNs and the planned snapshots of those
LUNs. Each iSCSI snapshot may require the same amount of
space on the file system as the LUN.
Create and mount file systems for iSCSI LUNs.
◆
The next step in configuring iSCSI targets on a Celerra Network
Server is to create and mount one or more file systems to provide
a dedicated storage resource for the iSCSI LUNs. Create and
mount a file system through Celerra Manager or the CLI. The
Celerra Manager Online Help and the technical module Managing
Celerra Volumes and File Systems Manually provide instructions.
VNX series for File
IMPORTANT
iSCSI functionality is available for the VNX unified storage
platforms and Gateway file systems, but must first be enabled by
EMC Customer Service.
VNX 5000 series
Unified storage system
The VNX 5000 series unified storage system implements a modular
architecture that integrates hardware components for block, file, and
object with concurrent support for native NAS, iSCSI, Fibre Channel,
and FCoE protocols. Figure 10 shows an example of a VNX 5000
series unified storage system configuration.
Figure 10  VNX 5000 series iSCSI configuration (iSCSI initiators connect over a TCP/IP network or direct connect to a VNX 5xxx system with CLARiiON FC targets)
VNX series Gateway
VG2
The EMC VNX series Gateway VG2 platform delivers a
comprehensive, consolidated solution that adds NAS storage in a
centrally managed information storage system. Figure 11 shows an
example of a VNX series Gateway VG2 configuration.
Figure 11  VNX VG2 iSCSI configuration (iSCSI initiators connect over a TCP/IP network or direct connect to a VNX VG2 gateway, which accesses CLARiiON or Symmetrix VMAX FC targets through a Fibre Channel fabric)
Configuring iSCSI targets
This section lists the tasks you must perform to configure iSCSI
targets and LUNs on the Celerra Network Server.
The online Celerra man pages and the Celerra Network Server
Command Reference Manual provide detailed descriptions of the
commands used in these procedures.
1. Create iSCSI targets:
You need to create one or more iSCSI targets on the Data Mover
so an iSCSI initiator can establish a session and exchange data
with the Celerra Network Server.
2. Create iSCSI LUNs:
After creating an iSCSI target, you must create iSCSI LUNs on the
target. The LUNs provide access to the storage space on the
Celerra Network Server. From the point of view of a client
system, a Celerra iSCSI LUN appears as any other disk device.
3. Create iSCSI LUN masks:
On the Celerra Network Server, a LUN mask on a target controls
incoming iSCSI access by granting or denying an iSCSI initiator
access to specific iSCSI LUNs on that target. When created, an
iSCSI target has no LUN masks, which means no initiator can
access LUNs on that target. To enable an initiator to access LUNs
on a target, you need to create a LUN mask to specify the initiator
and the LUNs it can access.
4. Configure iSNS on the Data Mover (optional):
If you want iSCSI initiators to automatically discover the iSCSI
targets on a Data Mover, you can configure an iSNS client on the
Data Mover. Configuring an iSNS client on the Data Mover
causes the Data Mover to register all of its iSCSI targets with an
external iSNS server. iSCSI initiators can then query the iSNS
server to discover the available targets on the Data Movers.
5. Create CHAP entries (optional):
If you want a Data Mover to authenticate the identity of each
iSCSI initiator, configure CHAP authentication on the Data
Mover. To configure CHAP, you must:
a. Set the appropriate parameters so targets on the Data Mover
require CHAP authentication.
b. Create a CHAP entry for each initiator that contacts the Data
Mover. CHAP entries are configured on each Data Mover.
Each initiator has a unique CHAP secret for the Data Mover.
c. In some cases, initiators authenticate the identity of the targets
as well. In this case, you must configure a CHAP entry for
reverse authentication. Reverse authentication entries differ
from regular CHAP entries because each Data Mover can have
only one CHAP secret. The Data Mover uses the same CHAP
secret when any iSCSI initiator authenticates a target on the
Data Mover.
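CHAP itself is a challenge-response scheme: per RFC 1994, the response is the MD5 hash of the identifier, the secret, and the challenge. The following sketch shows the computation for both forward CHAP (per-initiator secrets) and reverse CHAP (one target secret per Data Mover); the secrets and challenge values are made up for illustration.

```python
import hashlib

def chap_response(chap_id: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(Identifier || secret || challenge)
    return hashlib.md5(bytes([chap_id]) + secret + challenge).digest()

# Forward CHAP: the target verifies each initiator using that initiator's secret.
secrets = {"iqn.1991-05.com.microsoft:host1": b"host1secret12345"}   # per-initiator entries
challenge = b"\x01" * 16
resp = chap_response(7, secrets["iqn.1991-05.com.microsoft:host1"], challenge)
# The target recomputes the hash from its stored secret and compares.
assert resp == chap_response(7, b"host1secret12345", challenge)

# Reverse CHAP: the Data Mover answers every initiator with its single target secret.
target_secret = b"datamoversecret1"
assert chap_response(3, target_secret, challenge) == chap_response(3, target_secret, challenge)
```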
6. Start the iSCSI service:
Before using iSCSI targets on the Celerra Network Server, you
must start the iSCSI service on the Data Mover.
References
For more information, refer to Configuring iSCSI Targets on
Celerra, available on EMC Online Support at
https://support.emc.com.
Bridged solutions
The following switches are discussed in this section:
◆ “Brocade”, next
◆ “Cisco” on page 63
Brocade
The FC4-16IP iSCSI gateway service is an intermediate device in the
network, allowing iSCSI initiators in an IP SAN to access and utilize
storage in a Fibre Channel (FC) SAN.
Supported configurations
The iSCSI gateway enables applications on an IP network to use an
iSCSI initiator to connect to FC targets. The iSCSI gateway translates
iSCSI protocol to Fibre Channel Protocol (FCP), bridging the IP
network and FC SAN.
Note: The FC4-16IP iSCSI gateway service is not compatible with other iSCSI
gateway platforms, including Brocade iSCSI Gateway or the SilkWorm
Multiprotocol Router.
Figure 12 shows a basic iSCSI gateway service implementation.
Figure 12 iSCSI gateway service basic implementation
The Brocade FC4-16IP blade acts as an iSCSI gateway between
FC-attached targets and iSCSI initiators. On the iSCSI initiator, iSCSI
is mapped between the SCSI driver and the TCP/IP stack. At the
iSCSI gateway port, the incoming iSCSI data is converted to FCP
(SCSI on FC) by the iSCSI virtual initiator and then forwarded to the
FC target. This allows low-cost servers to leverage an existing FC
infrastructure.
To represent all iSCSI initiators and sessions, each iSCSI portal has
one iSCSI virtual initiator (VI) to the FC fabric that appears as an
N_Port device with a special WWN format. Regardless of the number
of iSCSI initiators or iSCSI sessions sharing the portal, Fabric OS uses
one iSCSI VI per iSCSI portal.
Fabric OS provides a mechanism that maps LUNs to iSCSI virtual
targets (VTs), a one-to-one mapping with a unique iSCSI Qualified
Name (IQN) for each target. It presents an iSCSI VT for each native
FC target to the IP network and an iSCSI VI for each iSCSI port to the
FC fabric.
Fabric OS also supports more complicated configurations, allowing
each iSCSI VT to be mapped to one or more physical FC targets. Each
FC target can have one or more LUNs. Physical LUNs can be mapped
to different virtual LUNs.
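The mapping can be pictured as a small table keyed by FC target WWPN. The IQN naming scheme below is hypothetical — Fabric OS generates its own names — but it illustrates the one-to-one VT mapping and virtual LUN remapping described above.

```python
# Sketch of the one-to-one VT mapping described above. The IQN scheme here is
# hypothetical, not the format Fabric OS actually generates.
def vt_iqn_for_wwpn(wwpn: str) -> str:
    # One iSCSI virtual target per FC target port, each with a unique IQN.
    return "iqn.2002-12.com.example.gateway:" + wwpn.replace(":", "").lower()

vt_map = {
    "50:06:04:8A:CC:AA:BB:01": {0: 0, 1: 5},   # physical LUN -> virtual LUN
}
wwpn = "50:06:04:8A:CC:AA:BB:01"
assert vt_iqn_for_wwpn(wwpn) == "iqn.2002-12.com.example.gateway:5006048accaabb01"
assert vt_map[wwpn][1] == 5   # physical LUN 1 is presented as virtual LUN 5
```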
Implementation best practices

Table 1 lists scalability guidelines, restrictions, and limitations:

Table 1 Scalability guidelines

# of iSCSI sessions per port: 64
# of iSCSI ports per FC4-16IP blade: 8
# of iSCSI blades in a switch: 4
# of iSCSI sessions per FC4-16IP blade: 512
# of iSCSI sessions per switch: 1024
# of TCP sessions per switch: 1024
# of TCP connections per iSCSI session: 2
# of iSCSI sessions per fabric: 4096
# of TCP connections per fabric: 4096
# of iSCSI targets per fabric: 4096
# of CHAP entries per fabric: 4096
# of LUNs per iSCSI target: 256
# of members per discovery domain: 64
# of discovery domains per discovery domain set: 4096
# of discovery domain sets: 4
The following are installation tips and recommendations:
◆ All iSCSI Virtual Initiators (VIs) should be included in the zone with the specified target.
◆ All iSCSI VIs must be registered on the CLARiiON array and added to the appropriate storage groups.
◆ All iSCSI VIs must be added to the Symmetrix VCM database if utilizing the device masking functionality.
◆ If the FC targets use access control lists/databases, you must add the FC NWWN/WWPN of the Ironman blade to the ACL/database (use fclunquery -s to determine the Ironman FC NWWN/WWPN).
◆ It is recommended to mask all LUNs for all VIs and to perform LUN masking from the Ironman blade by creating individual iSCSI Virtual Targets and assigning the LUNs to the appropriate iSCSI Virtual Target.
◆ Firmware upgrades are not online events for the Ironman GigE ports, so plan accordingly.
◆ The fcLunQuery command only gets addresses from targets that support the ReportLuns command.

References

All Brocade documentation can be located at
http://www.brocade.com. Click Brocade Connect to register, at no
cost, for a user ID and password.
The following documentation is available for Fabric OS:
◆ Fabric OS Administrator’s Guide
◆ Fabric OS Command Reference
◆ Fabric OS MIB Reference
◆ Fabric OS Message Reference
◆ Brocade Glossary
The following documentation is available for SilkWorm 48000
director and iSCSI blade:
◆ SilkWorm 48000 Hardware Reference Manual
◆ iSCSI Gateway Service Administrator’s Guide
◆ FC4-16IP Hardware Reference Manual
Cisco
Cisco MDS 9000 storage switches are multiprotocol switches that
support the Fibre Channel and Gigabit Ethernet (FCIP and iSCSI)
protocols. Each switch model can be used as a Fibre Channel-iSCSI
gateway to support iSCSI solutions with Fibre Channel targets
(Symmetrix, VNX series, and CLARiiON).
Cisco MDS 9000 family IP storage (IPS) services extend the reach of
Fibre Channel SANs by using open-standard, IP-based technology.
The switch allows IP hosts to access Fibre Channel storage using the
iSCSI protocol. The iSCSI feature is specific to the IPS module and is
available in Cisco MDS 9200 Switches or Cisco MDS 9500 Directors.
The Cisco MDS 9216i switch and the 14/2 Multiprotocol Services
(MPS-14/2) module also allow you to use Fibre Channel, FCIP, and
iSCSI features. The MPS-14/2 module is available for use in any
switch in the Cisco MDS 9200 Series or Cisco MDS 9500 Series.
Supported configurations
Initiator presentation modes (transparent and proxy)
The two modes available to present iSCSI hosts in the Fibre Channel
fabric are transparent initiator mode and proxy initiator mode.
◆ In transparent initiator mode, each iSCSI host is presented as one virtual Fibre Channel host. The benefit of transparent mode is that it allows a finer level of Fibre Channel access control configuration (similar to managing a "real" Fibre Channel host). Because of the one-to-one mapping from iSCSI to Fibre Channel, each host can have different zoning or LUN access control on the Fibre Channel storage device.
◆ In proxy initiator mode, there is only one virtual Fibre Channel host per IPS port, which all iSCSI hosts use to access Fibre Channel targets. In a scenario where the Fibre Channel storage device requires explicit LUN access control for every host, the static configuration for each iSCSI initiator can be overwhelming. In such cases, proxy initiator mode simplifies the configuration.
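The practical difference between the two modes is how many virtual Fibre Channel hosts appear in the fabric, which the following sketch (illustrative only, not MDS code) makes concrete:

```python
# Rough count of virtual FC hosts each initiator presentation mode creates.
def virtual_fc_hosts(mode: str, n_iscsi_hosts: int, n_ips_ports: int) -> int:
    if mode == "transparent":
        return n_iscsi_hosts          # one virtual FC host per iSCSI host
    if mode == "proxy":
        return n_ips_ports            # one virtual FC host per IPS port, shared by all hosts
    raise ValueError(mode)

assert virtual_fc_hosts("transparent", 50, 2) == 50   # fine-grained zoning, more fabric objects
assert virtual_fc_hosts("proxy", 50, 2) == 2          # far simpler LUN access configuration
```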
Figure 13 shows an example of a supportable configuration.
Figure 13 Supportable configuration example
The following iSCSI configurations are supported:
◆ The Cisco MDS switches can be used as a Fibre Channel-iSCSI gateway to run applications using an iSCSI initiator to Symmetrix, VNX series, and CLARiiON storage devices.
◆ Host-based redundancy is supported through the use of EMC PowerPath®.
iSCSI configuration has the following limits:
◆ The maximum number of iSCSI initiators supported in a fabric is 1800.
◆ The maximum number of iSCSI sessions supported by an IPS port in either transparent or proxy initiator mode is 300.
◆ The maximum number of iSCSI sessions supported by a switch is 5000.
◆ The maximum number of iSCSI targets supported in a fabric is 6000.
Configuration overview
To use the iSCSI feature, you must explicitly enable iSCSI on the
required switches in the fabric. By default, this feature is disabled in
all switches in the Cisco MDS 9000 family. Each physical Gigabit
Ethernet interface on an IPS module or MPS-14/2 module can be
used to translate and route iSCSI requests to Fibre Channel targets
and responses in the opposite direction. To enable this capability, the
corresponding iSCSI interface must be in an enabled state.
Presenting Fibre Channel targets as iSCSI targets
The IPS module or MPS-14/2 module presents physical Fibre
Channel targets as iSCSI virtual targets, allowing them to be accessed
by iSCSI hosts. It does this in one of two ways:
◆ Dynamic mapping — Automatically maps all the Fibre Channel target devices/ports as iSCSI devices. Use this mapping to create automatic iSCSI target names.
◆ Static mapping — Manually creates iSCSI target devices and maps them to the whole Fibre Channel target port or a subset of Fibre Channel LUNs. With this mapping, you must specify unique iSCSI target names.
Presenting iSCSI hosts as virtual Fibre Channel hosts
The IPS module or MPS-14/2 module connects to the Fibre Channel
storage devices on behalf of the iSCSI host to send commands and
transfer data to and from the storage devices. These modules use a
virtual Fibre Channel N_Port to access the Fibre Channel storage
devices on behalf of the iSCSI host. iSCSI hosts are identified by
either iSCSI qualified name (IQN) or IP address.
Initiator identification
iSCSI hosts can be identified by the IPS module or MPS-14/2 module
using the following:
◆ iSCSI qualified name (IQN)
An iSCSI initiator is identified based on the iSCSI node name it provides in the iSCSI login. This mode can be useful if an iSCSI host has multiple IP addresses and you want to provide the same service independent of the IP address used by the host. An initiator with multiple IP addresses (multiple network interface cards, NICs) has one virtual N_Port on each IPS port to which it logs in.
◆ IP address
An iSCSI initiator is identified based on the IP address of the iSCSI host. This mode is useful if an iSCSI host has multiple IP addresses and you want to provide different services based on the IP address used by the host. It is also easier to get the IP address of a host than its iSCSI node name. A virtual N_Port is created for each IP address the host uses to log in to iSCSI targets. If a host using one IP address logs in to multiple IPS ports, each IPS port will create one virtual N_Port for that IP address.
You can configure the iSCSI initiator identification mode on each IPS
port and all the iSCSI hosts terminating on the IPS port will be
identified according to that configuration. The default mode is to
identify the initiator by name.
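The effect of the identification mode on virtual N_Port creation can be summarized in a short sketch (illustrative only, not MDS code):

```python
# Illustrative count of virtual N_Ports created on one IPS port per initiator,
# depending on the configured identification mode.
def n_ports_on_ips_port(mode: str, host_ips: list) -> int:
    if mode == "iqn":
        return 1                    # one virtual N_Port per initiator node name
    if mode == "ip":
        return len(set(host_ips))   # one virtual N_Port per source IP address
    raise ValueError(mode)

ips = ["10.0.0.5", "10.0.1.5"]    # a host with two NICs
assert n_ports_on_ips_port("iqn", ips) == 1   # same service regardless of source IP
assert n_ports_on_ips_port("ip", ips) == 2    # different service per IP is possible
```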
iSCSI access control
Two mechanisms of access control are available for iSCSI devices.
◆ Fibre Channel zoning-based access control
◆ iSCSI ACL-based access control
Depending on the initiator mode used to present the iSCSI hosts in
the Fibre Channel fabric, either or both access control mechanisms
can be used.
Fibre Channel zoning-based access control
Cisco SAN-OS VSAN and zoning concepts have been extended to
cover both Fibre Channel devices and iSCSI devices. Zoning is the
standard access control mechanism for Fibre Channel devices, which
is applied within the context of a VSAN. Fibre Channel zoning has
been extended to support iSCSI devices, and this extension has the
advantage of having a uniform, flexible access control mechanism
across the whole SAN.
Fibre Channel zone membership can be based on the following:
◆ Fibre Channel device WWPN.
◆ Interface and switch WWN. A device connecting through that interface is within the zone.
In the case of iSCSI, multiple iSCSI devices may be connected behind
an iSCSI interface. Interface-based zoning may not be useful because
all the iSCSI devices behind the interface will automatically be within
the same zone.
In transparent initiator mode (where one Fibre Channel virtual
N_Port is created for each iSCSI host), the standard Fibre Channel
device WWPN-based zoning membership mechanism can be used if
an iSCSI host has static WWN mapping.
Zoning membership mechanism has been enhanced to add iSCSI
devices to zones based on the following:
◆ IPv4 address/subnet mask
◆ IPv6 address/prefix length (currently EMC does not support IP version 6)
◆ iSCSI qualified name (IQN)
◆ Symbolic node name (IQN)
For iSCSI hosts that do not have a static WWN mapping, the feature
allows the IP address or iSCSI node name to be specified as zone
members. Note that iSCSI hosts that have static WWN mapping can
also use these features. IP address-based zone membership allows
multiple devices to be specified in one command by providing the
subnet mask.
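The subnet form of membership is easy to reason about with the standard ipaddress module; a single /24 entry below admits every initiator on that network:

```python
import ipaddress

# IP address-based zone membership: one subnet entry admits many initiators.
zone_member = ipaddress.ip_network("192.168.10.0/24")

def in_zone(initiator_ip: str) -> bool:
    # An initiator is a zone member if its address falls inside the subnet.
    return ipaddress.ip_address(initiator_ip) in zone_member

assert in_zone("192.168.10.44")       # inside the /24: zone member
assert not in_zone("192.168.11.44")   # outside the /24: not a member
```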
iSCSI-based access control
iSCSI-based access control is applicable only if static iSCSI virtual
targets are created. For a static iSCSI target, you can configure a list of
iSCSI initiators that are allowed to access the targets.
By default, static iSCSI virtual targets are not accessible to any iSCSI
host. You must explicitly configure accessibility to allow an iSCSI
virtual target to be accessed by all hosts. The initiator access list can
contain one or more initiators. The iSCSI initiator can be identified by
one of the following mechanisms:
◆ iSCSI node name
◆ IPv4 address and subnet
◆ IPv6 address (currently EMC does not support IP version 6)
Note: For a transparent mode iSCSI initiator, if both Fibre Channel zoning
and iSCSI ACLs are used, for every static iSCSI target that is accessible to the
iSCSI host, the initiator's virtual N_Port should be in the same Fibre Channel
zone as the Fibre Channel target.
iSCSI session authentication
The IPS module or MPS-14/2 module supports the iSCSI
authentication mechanism to authenticate the iSCSI hosts that request
access to the storage devices. By default, the IPS modules or
MPS-14/2 modules allow CHAP or None authentication of iSCSI
initiators. If authentication must always be enforced, you must
configure the switch to allow only CHAP authentication.
For CHAP user name or secret validation, you can use any method
supported and allowed by the Cisco MDS AAA infrastructure. AAA
authentication supports a RADIUS, TACACS+, or local
authentication device.
iSCSI immediate data and unsolicited data features
Cisco MDS switches support the iSCSI immediate data and
unsolicited data features if requested by the initiator during the login
negotiation phase. Immediate data is iSCSI write data contained in the
data segment of an iSCSI command protocol data unit (PDU), such as
combining the write command and write data together in one PDU.
Unsolicited data is iSCSI write data that an initiator sends to the iSCSI
target, such as an MDS switch, in an iSCSI data-out PDU without
having to receive an explicit ready to transfer (R2T) PDU from the
target.
These two features help reduce I/O time for small write commands
because they remove one round trip between the initiator and the
target for the R2T PDU. As an iSCSI target, the MDS switch allows up
to 64 KB of unsolicited data per command. This is controlled by the
FirstBurstLength parameter during the iSCSI login negotiation phase.
If an iSCSI initiator supports immediate data and unsolicited data
features, these features are automatically enabled on the MDS switch
with no configuration required.
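The decision the initiator makes can be reduced to a comparison against the negotiated FirstBurstLength, as this illustrative sketch shows:

```python
# Whether a write can go out as immediate/unsolicited data, avoiding the R2T round trip.
FIRST_BURST_LENGTH = 64 * 1024    # the MDS switch allows up to 64 KB per command

def needs_r2t(write_size: int) -> bool:
    # Writes within FirstBurstLength may be sent as immediate/unsolicited data;
    # anything beyond that must wait for an R2T PDU from the target.
    return write_size > FIRST_BURST_LENGTH

assert not needs_r2t(4096)         # small write: one fewer round trip
assert needs_r2t(256 * 1024)       # large write: R2T still required
```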
Implementation best practices
Symmetrix setup
Symmetrix SRDF ports should be configured as standard Fibre
Channel SRDF ports. In a Fibre Channel environment, the Cisco MDS
switch provides all the services of a Fibre Channel switch, similar to
those provided by any other Fibre Channel switch.
VNX series setup
VNX ports should be configured as standard Fibre Channel target
ports for iSCSI configurations.
CLARiiON setup
CLARiiON ports should be configured as standard Fibre Channel
target ports for iSCSI configurations.
References
All documentation can be found at www.cisco.com. Search for the
Cisco MDS Configuration Guide and choose the guide relevant to the
code running in your environment.
iSCSI SAN Topologies TechBook
iSCSI Solutions
Summary
Table 2 compares the iSCSI solution features available.

Table 2 iSCSI solution features comparison table

Jumbo frames: Celerra: yes; Brocade: yes; MDS: yes; Symmetrix (N): yes; VNX series: yes; CLARiiON (N): yes; XtremIO: no

O/S support: Celerra: everything but AIX; all others: refer to the EMC Support Matrix

Number of initiators per port/box: Celerra: 256, but check the EMC Support Matrix; Brocade: 64/512 (per port/blade); MDS: 300/2000, refer to the EMC Support Matrix; Symmetrix: DMX 512, VMAX 1024; VNX series: refer to Table 3 on page 70 and the EMC Support Matrix; CLARiiON: refer to Table 3 on page 70 and the EMC Support Matrix; XtremIO: 64 per initiator group, 128 per cluster; refer to the EMC Support Matrix

Proxy initiator: Celerra: yes; Brocade: yes; MDS: 500/2000, refer to the EMC Support Matrix; Symmetrix (N), VNX series, CLARiiON (N), XtremIO: n/a

Header/data digest: Celerra: yes; Brocade: yes; MDS: yes; Symmetrix (N): yes; VNX series: yes; CLARiiON (N): yes; XtremIO: no

Immediate data: Celerra: yes; Brocade: yes; MDS: yes; Symmetrix: DMX no, VMAX yes; VNX series: no; CLARiiON (N): no; XtremIO: yes

Initial R2T: Celerra: yes; Brocade: yes; MDS: yes; Symmetrix: DMX yes, VMAX yes; VNX series: no; CLARiiON (N): no; XtremIO: no

Authentication: Celerra: yes, CHAP; Brocade: yes, CHAP; MDS: yes; Symmetrix: DMX yes (CHAP), VMAX yes (CHAP); VNX series: yes, CHAP; CLARiiON (N): yes, CHAP; XtremIO: no

Encryption: no for all solutions

PowerPath (PP) support: Celerra: yes, for all supported environments; all others: yes, refer to the EMC Support Matrix for all supported environments
The EMC Support Matrix is available through E-Lab Interoperability
Navigator at http://elabnavigator.EMC.com.
Table 3 lists information on VNX and CX4 front-end port support.
Table 3 VNX series and CLARiiON CX4 front-end port support

Max 1 Gb/s iSCSI ports per SP / per storage system: CX4-120: 4/8; CX4-240: 8/16; CX4-480: 8/16; CX4-960: 8/16; VNX 5300: 4/8; VNX 5500: 8/16; VNX 5700: 12/24; VNX 7500: 12/24

Max 10 Gb/s iSCSI ports per SP / per storage system: CX4-120: 2/4; CX4-240: 2/4; CX4-480: 4/8; CX4-960: 4/8; VNX 5300: 4/8; VNX 5500: 4/8; VNX 5700: 6/8; VNX 7500: 6/8

Max initiators per 1 Gb/s iSCSI port: CX4-120: 256; CX4-240: 256; CX4-480: 256; CX4-960: 256; VNX 5300: 256; VNX 5500: 512; VNX 5700: 1,024; VNX 7500: 1,024

Max initiators per 10 Gb/s iSCSI port: CX4-120: 256; CX4-240: 512; CX4-480: 1,024; CX4-960: 1,024; VNX 5300: 256; VNX 5500: 512; VNX 5700: 1,024; VNX 7500: 1,024

Max VLANs per 10 Gb/s iSCSI port: 8 for all models

Max VLANs per 1 Gb/s iSCSI port: CX4-120: 2; CX4-240: 2; CX4-480: 2; CX4-960: 2; VNX 5300: 8; VNX 5500: 8; VNX 5700: 8; VNX 7500: 8
4
Use Case Scenarios
This chapter provides the following use case scenarios.
◆ Connecting an iSCSI Windows host to a VMAX array................. 72
◆ Connecting an iSCSI Linux host to a VMAX array ....................... 99
◆ Configuring the VNX for block 1 Gb/10 Gb iSCSI port............. 117
◆ Connecting an iSCSI Windows host to an XtremIO array ......... 173
Connecting an iSCSI Windows host to a VMAX array
Figure 14 shows a Windows host connected to a VMAX array. This
scenario will be used in this use case study.
This section includes the following information:
◆ “Configuring storage port flags and an IP address on a VMAX array” on page 72
◆ “Configuring LUN Masking on a VMAX array” on page 77
◆ “Configuring an IP address on a Windows host” on page 79
◆ “Configuring iSCSI on a Windows host” on page 81
◆ “Configuring Jumbo frames” on page 97
◆ “Setting MTU on a Windows host” on page 97
“Setting MTU on a Windows host” on page 97
Figure 14 Windows host connected to a VMAX array with 1 G connectivity
(The figure shows the Windows Server, running PowerPath, with interfaces 2001:db8:0:f108::2 and 2001:db8:0:f109::2 reaching VMAX ports SE 9G:0 at 2001:db8:0:f108::1 and SE 10G:0 at 2001:db8:0:f109::1 through a router, each on its own IPv6 subnet.)
This setup consists of a Windows host connected to a VMAX array as
follows:
1. The Windows host is connected via two paths with 1 G iSCSI and
IPv6.
2. The VMAX array is connected via two paths for 1 G and 10 G
iSCSI each.
3. PowerPath is installed on the host.
Configuring storage port flags and an IP address on a VMAX array
The following two methods discussed in this section can be used to
configure storage port flags and an IP address on a VMAX array:
◆ “Symmetrix Management Console” on page 73
◆ “Solutions Enabler” on page 76
Symmetrix Management Console
Note: For more details, refer to the EMC Symmetrix Management Console
online help, available on EMC Online Support at https://support.emc.com.
Follow instructions to download the help.
To configure storage port flags and an IP address on a VMAX
array using the Symmetrix Management Console, complete the
following steps:
1. Open the Symmetrix Management Console by using the IP
address of the array.
2. In the Properties tab, left-hand pane, select Symmetrix Arrays >
Directors > Gig-E, to navigate to the VMAX Gig-E storage port,
as shown in Figure 15.
3. Right-click the storage port you want to configure, check Online,
and select Port and Director Configuration > Set Port Attributes
from the drop-down menus, as shown in Figure 15.
Figure 15
EMC Symmetrix Manager Console, Directors
The Set Port Attributes dialog box displays, as shown in
Figure 16.
Figure 16
Set Port Attributes dialog box
4. In the Set Port Attributes dialog box, select the following, as
shown in Figure 16:
• Common_Serial_Number
• SCSI_3
• SPC2_Protocol_Version
• SCSI_Support1
Note: Refer to the appropriate host connectivity guide, available on EMC
Online Support at https://support.emc.com, for your operating system
for the correct port attributes to set.
5. In the Set Port Attributes dialog box, enter the following, as
shown in Figure 16:
• For IPv4, enter the IPv4 Address, IPv4 Default Gateway, and
IPv4 Netmask.
• For IPv6, enter the IPv6 Addresses and IPv6 Net Prefix.
6. Click Add to Config Session List.
7. In the Symmetrix Manager Console window, select the Config
Session tab, as shown in Figure 17.
Figure 17
Config Session tab
8. In the My Active Tasks tab, click Commit All, as shown in
Figure 18.
Figure 18
My Active Tasks, Commit All
Solutions Enabler
To configure storage port flags and an IP address on a VMAX
array using Solutions Enabler, refer to the following sections:
◆ “Setting storage port flags and IP address” on page 76
◆ “Setting flags per initiator group” on page 76
◆ “Viewing flags setting for initiator group” on page 77
Setting storage port flags and IP address
Issue the following command:
symconfigure -sid <SymmID> -file <command file> preview|commit
where command file contains:
set port DirectorNum:PortNum
[FlagName=enable|disable][, ...] ] gige
primary_ip_address=IPAddress
primary_netmask=IPAddress
default_gateway=IPAddress
isns_ip_address=IPAddress
primary_ipv6_address=IPAddress
primary_ipv6_prefix=<0 -128>
[fa_loop_id=integer] [hostname=HostName];
For example:
Command file for enabling Common_Serial_Number, SCSI_3,
SPC2_Protocol_Version, and SCSI_Support1 flags and setting IPv6
address and prefix on port 9g:0:
set port 9g:0
Common_Serial_Number=enable, SCSI_3=enable, SPC2_Protocol_Version=enable,
SCSI_Support1=enable gige
primary_ipv6_address=2001:db8:0:f108::1
primary_ipv6_prefix=64;
Setting flags per initiator group
Issue the following command:
symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on <flag>
<-enable |-disable> | off [flag]>
For example:
Enabling Common_Serial_Number, SCSI_3, SPC2_Protocol_Version
and SCSI_Support1 flags for initiator group SGELI2-83:
symaccess -sid 316 -name SGELI2-83_IG -type initiator set ig_flags on
Common_Serial_Number, SCSI_3, SPC2_Protocol_Version, SCSI_Support1 –enable
Viewing flags setting for initiator group
Issue the following command:
symaccess -sid <SymmID> -type initiator show <GroupName> -detail
For example:
symaccess -sid 316 -type initiator show SGELI2-83_IG -detail
Configuring LUN Masking on a VMAX array
The following two methods discussed in this section can be used to
configure LUN Masking on a VMAX array:
◆ “Using Symmetrix Management Console” on page 77
◆ “Using Solutions Enabler” on page 78

Using Symmetrix Management Console

To create an initiator group, port group, storage group, and masking
view using the Symmetrix Management Console, refer to the EMC
Symmetrix Management Console online help, available on EMC
Online Support at https://support.emc.com. Follow instructions to
download the help, then refer to the Storage Provisioning section, as
shown in Figure 19.

Figure 19 EMC Symmetrix Management Console, Storage Provisioning
Using Solutions Enabler

To create an initiator group, port group, storage group, and masking
view using Solutions Enabler, refer to the following sections:
◆ “Creating an initiator group” on page 78
◆ “Creating a port group” on page 78
◆ “Creating a storage group” on page 78
◆ “Creating masking view” on page 78
Creating an initiator group
Issue the following command:
symaccess -sid <SymmID> -type initiator -name <GroupName> create
symaccess -sid <SymmID> -type initiator -name <GroupName> -iscsi <iqn> add
For example:
symaccess -sid 316 -type initiator -name SGELI2-83_IG create
symaccess -sid 316 -type initiator -name SGELI2-83_IG -iscsi
iqn.1991-05.com.microsoft:sgeli2-83 add
Creating a port group
Issue the following command:
symaccess -sid <SymmID> -type port -name <GroupName> create
symaccess -sid <SymmID> -type port -name <GroupName> -dirport
<DirectorNum>:<PortNum> add
For example:
symaccess -sid 316 -type port -name SGELI2-83_PG create
symaccess -sid 316 -type port -name SGELI2-83_PG -dirport 9g:0 add
Creating a storage group
Issue the following command:
symaccess -sid <SymmID> -type storage -name <GroupName> create
symaccess -sid <SymmID> -type storage -name <GroupName> add devs
<SymDevStart>:<SymDevEnd>
For example:
symaccess -sid 316 -type storage -name SGELI2-83_SG create
symaccess -sid 316 -type storage -name SGELI2-83_SG add devs 0047:110
Creating masking view
Issue the following command:
symaccess -sid <SymmID> create view -name <MaskingView> -ig <InitiatorGroup> -pg
<PortGroup> -sg <StorageGroup>
For example:
symaccess -sid 316 create view -name SGELI2-83_MV -ig SGELI2-83_IG -pg
SGELI2-83_PG -sg SGELI2-83_SG
Listing masking view
Issue the following command:
symaccess -sid <SymmID> list view -name <MaskingView>
For example:
symaccess -sid 316 list view -name SGELI2-83_MV
For more details, refer to the EMC Solutions Enabler Symmetrix Array
Controls CLI Product Guide, available on EMC Online Support at
https://support.emc.com.
Configuring an IP address on a Windows host
To configure an IP address on a Windows host, complete the
following steps:
Note: Step 1 through Step 5 are applicable to Windows 2008 Server. Other
versions of Windows may be different.
1. Click Start > Control Panel.
2. Click View network status and tasks.
3. Click Change adapter settings.
4. Right-click on the adapter and select Properties.
5. Double-click the Internet Protocol version:
• For IPv4, double-click Internet Protocol Version 4 (TCP/IPv4).
• For IPv6, double-click Internet Protocol Version 6 (TCP/IPv6).
6. Go to Network Connections and open the IPv6 Properties
window. The Internet Protocol Version 6 (TCP/IPv6) Properties
dialog box opens, as shown in Figure 20.
Figure 20
Internet Protocol Version 6 (TCP/IPv6) Properties dialog box
7. Enter the IPv6 address and the Subnet prefix length.
8. Click OK.
9. Ping the storage port to test connectivity, as shown in Figure 21.
Figure 21 Test connectivity
Configuring iSCSI on a Windows host
You can configure iSCSI on a Windows host using the steps provided
in the following sections:
◆ “Using Microsoft iSCSI Initiator GUI” on page 81
◆ “Using Microsoft iSCSI Initiator CLI” on page 93

Using Microsoft iSCSI Initiator GUI

Note: The screenshots used in this section are taken from the built-in MS
iSCSI Initiator application in Windows 2008 Server. Other versions of
Windows might have a different GUI.
This section provides the steps needed for:
◆ “Configuring via Target Portal Discovery” on page 81
◆ “Configuring via iSNS Server” on page 89
Configuring via Target Portal Discovery
To configure iSCSI on Windows via Target Port Discovery, complete
the following steps:
1. Launch the Microsoft iSCSI Initiator GUI.
The iSCSI Initiator Properties window displays, as shown in
Figure 22 on page 82.
Figure 22 iSCSI Initiator Properties window
2. Select the Discovery tab, click Discover Portal, and click OK, as
shown in Figure 23.
Figure 23
Discovery tab, Discover Portal
The Discover Target Portal dialog box displays, as shown in
Figure 24.
Figure 24
Discover Portal dialog box
3. Enter the IPv6 address of the target and click Advanced.
4. The Advanced Settings window displays, as shown in Figure 25
on page 85.
Figure 25
Advanced Settings window
5. In the General tab, choose the Local adapter and Initiator IP
from the pull-down menu. Select Data digest and Header digest,
if required.
6. Click OK to close the Advanced Settings window.
7. Click OK to close the Discover Target Portal window.
The targets behind the discovered portal now display, as shown
in Figure 26.
Figure 26
Target portals
8. Select the Targets tab, as shown in Figure 27.

Figure 27 Targets tab
9. Select one Target and click Connect. Repeat for each Target.
The Connect to Target dialog box displays, as shown in Figure 28.
Figure 28
Connect to Target dialog box
10. Select the Add this connection to the list of Favorite Targets
checkbox.
11. Click OK.
The host is connected to the targets, as shown in Figure 29.
Figure 29
Discovered targets
12. Select the Volumes and Devices tab.
13. Click Auto Configure to bind the volumes, as shown in Figure 30.
Figure 30 Volumes and Devices tab
14. Open PowerPath. The devices appear, as shown in Figure 31.
Figure 31
Devices
Configuring via iSNS Server
To configure via iSNS Server, complete the following steps:
1. Set the iSNS Server IP address for both storage ports using
Solutions Enabler.
symconfigure -sid 2316 -file isns.txt commit
Execute a symconfigure operation for symmetrix '000192602316' (y/[n]) ? y
A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...............Established.
Processing symmetrix 000192602316
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Initiating COMMIT of configuration changes................Queued.
COMMIT requesting required resources......................Obtained.
Step 004 of 050 steps.....................................Executing.
Step 017 of 050 steps.....................................Executing.
Step 026 of 050 steps.....................................Executing.
Step 042 of 085 steps.....................................Executing.
Step 060 of 085 steps.....................................Executing.
Step 064 of 085 steps.....................................Executing.
Step 082 of 085 steps.....................................Executing.
Local: COMMIT............................................Done.
Terminating the configuration change session..............Done.
The configuration change session has successfully completed.
Where isns.txt contains:
set port 10G:0
isns_ip_address=12.10.10.206
Note: iSNS Server IP Address supports only IPv4.
2. Launch the iSNS Server.
The storage ports appear as shown in Figure 32.
Figure 32
iSNS Server Properties window, storage ports
3. Launch the Microsoft iSCSI Initiator GUI.
4. Select the Discovery tab, as shown in Figure 33.
Figure 33
Discovery tab
5. Click Add Server. The Add iSNS Server window displays.
6. Enter the IP address for each iSNS Server interface and click OK.
The iSNS Server is successfully added, as shown in Figure 34.
Figure 34
iSNS Server added
7. Return to the iSNS Server. The Initiator has been successfully
added, as shown in Figure 35.
Figure 35
iSNS Server
8. Follow Step 8 on page 96 through Step 10 on page 96 in
“Configuring via Target Portal Discovery,” discussed next.
Using Microsoft iSCSI Initiator CLI
Steps for configuring iSCSI on a Windows host using Microsoft iSCSI
Initiator CLI are provided in the following sections:
◆ "Configuring via Target Portal Discovery" on page 93
◆ "Configuring via iSNS Server" on page 97
Configuring via Target Portal Discovery
To configure iSCSI on Windows using Microsoft iSCSI Initiator CLI,
complete the following steps:
1. Add the Target Portal for each storage port.
C:\>iscsicli QAddTargetPortal 2001:db8:0:f108::1
Microsoft iSCSI Initiator Version 6.1 Build 7601
The operation completed successfully.
2. List the Target Portals.
C:\>iscsicli ListTargetPortals
Microsoft iSCSI Initiator Version 6.1 Build 7601
Total of 2 portals are persisted:

Address and Socket    : 2001:db8:0:f108::1 3260
Symbolic Name         :
Initiator Name        :
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x0
Login Flags           : 0x0

Address and Socket    : 2001:db8:0:f109::1 3260
Symbolic Name         :
Initiator Name        :
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x0
Login Flags           : 0x0
The operation completed successfully.
3. List the Targets behind the discovered Portals. The Target iqn is
displayed.
C:\>iscsicli ListTargets
Microsoft iSCSI Initiator Version 6.1 Build 7601
Targets List:
iqn.1992-04.com.emc:50000972082431a4
iqn.1992-04.com.emc:50000972082431a0
The operation completed successfully.
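The names listed above are iSCSI Qualified Names (IQNs, RFC 3720): the "iqn." type designator, the year and month in which the naming authority registered its domain, the reversed domain name, and an optional vendor-assigned suffix (for these VMAX targets, derived from the director port WWN). A minimal sketch, not part of the iscsicli tooling, that splits such a name into its parts:

```python
# Split an iSCSI Qualified Name (IQN) into its parts.
# Format (RFC 3720): iqn.<yyyy-mm>.<reversed-domain>[:<unique-suffix>]
def parse_iqn(name):
    if not name.startswith("iqn."):
        raise ValueError("not an iqn-type name")
    body, _, suffix = name[4:].partition(":")
    date, _, authority = body.partition(".")
    return {"date": date, "naming_authority": authority, "suffix": suffix}

# One of the target names discovered above: "com.emc" registered by
# EMC in April 1992, with the port WWN as the unique suffix.
info = parse_iqn("iqn.1992-04.com.emc:50000972082431a0")
```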
4. Get the Target information.
C:\>iscsicli TargetInfo iqn.1992-04.com.emc:50000972082431a0
Microsoft iSCSI Initiator Version 6.1 Build 7601
Discovery Mechanisms :
"SendTargets:*2001:db8:0:f108::1 0003260 Root\ISCSIPRT\0000_0 "
The operation completed successfully.
5. Log in to each Target. The Session Id is created.
C:\>iscsicli QLoginTarget iqn.1992-04.com.emc:50000972082431a0
Microsoft iSCSI Initiator Version 6.1 Build 7601
Session Id is 0xfffffa8007af4018-0x400001370000000c
Connection Id is 0xfffffa8007af4018-0xb
The operation completed successfully.
6. Display the Target Mappings assigned to all LUNs that the
initiators have logged in to.
C:\>iscsicli ReportTargetMappings
Microsoft iSCSI Initiator Version 6.1 Build 7601
Total of 2 mappings returned
Session Id            : fffffa8007af4018-400001370000000c
Target Name           : iqn.1992-04.com.emc:50000972082431a0
Initiator             : Root\ISCSIPRT\0000_0
Initiator Scsi Device : \\.\Scsi9:
Initiator Bus         : 0
Initiator Target Id   : 0
Target Lun: 0x100 <--> OS Lun: 0x1
Target Lun: 0x200 <--> OS Lun: 0x2
…

Session Id            : fffffa8007af4018-400001370000000d
Target Name           : iqn.1992-04.com.emc:50000972082431a4
Initiator             : Root\ISCSIPRT\0000_0
Initiator Scsi Device : \\.\Scsi9:
Initiator Bus         : 0
Initiator Target Id   : 1
Target Lun: 0x0 <--> OS Lun: 0x0
Target Lun: 0x100 <--> OS Lun: 0x1
…
7. The mappings obtained through the QLoginTarget command are
not persistent and will be lost at reboot. To have a persistent
connection, use the PersistentLoginTarget command for each
Target.
Note: The value T means the LUN is exposed as a device. Otherwise, the
LUN is not exposed and the only operations that can be performed are
SCSI Inquiry, SCSI Report LUNS, and SCSI Read Capacity, and only
through the iSCSI discovery service since the operating system is not
aware of the existence of the device.
C:\>iscsicli PersistentLoginTarget iqn.1992-04.com.emc:50000972082431a0
T * * * * * * * * * * * * * * * 0
Microsoft iSCSI Initiator Version 6.1 Build 7601
The operation completed successfully.
8. List the Persistent Targets.
C:\>iscsicli ListPersistentTargets
Microsoft iSCSI Initiator Version 6.1 Build 7601
Total of 2 persistent targets
Target Name           : iqn.1992-04.com.emc:50000972082431a0
Address and Socket    : 2001:0db8:0000:f108:0000:0000:0000:0001%0 3260
Session Type          : Data
Initiator Name        : Root\ISCSIPRT\0000_0
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x20
Login Flags           : 0x8
Username              :

Target Name           : iqn.1992-04.com.emc:50000972082431a4
Address and Socket    : 2001:0db8:0000:f109:0000:0000:0000:0001%0 3260
Session Type          : Data
Initiator Name        : Root\ISCSIPRT\0000_0
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x20
Login Flags           : 0x8
Username              :
The operation completed successfully.
9. Bind the Persistent Devices to cause the iSCSI Initiator service to
determine which disk volumes are currently exposed by the
active iSCSI sessions for all initiators and make that list persistent.
The next time the iSCSI Initiator service starts, it will wait for all
those volumes to be mounted before completing its service
startup.
C:\>iscsicli BindPersistentDevices
Microsoft iSCSI Initiator Version 6.1 Build 7601
The operation completed successfully.
10. Display the list of volumes and devices that are currently
persistently bound by the iSCSI initiator.
C:\>iscsicli ReportPersistentDevices
Microsoft iSCSI Initiator Version 6.1 Build 7601
Persistent Volumes
"\\?\scsi#disk&ven_emc&prod_power&#{4a54205a-c920-4e28-88c5-9a6296a74b0b}&emcp&power123#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}"
"\\?\scsi#disk&ven_emc&prod_power&#{4a54205a-c920-4e28-88c5-9a6296a74b0b}&emcp&power63#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}"
…
…
Configuring via iSNS Server
To configure iSCSI via iSNS Server, complete the following steps:
1. Set the iSNS Server IP address for both storage ports as described
in step 1 of "Configuring via iSNS Server" on page 89.
2. Add both iSNS Server interfaces.
C:\copa>iscsicli AddiSNSServer 2001:db8:0:f108::3
Microsoft iSCSI Initiator Version 6.1 Build 7601
The operation completed successfully.
3. List the iSNS Servers.
C:\copa>iscsicli ListiSNSServers
Microsoft iSCSI Initiator Version 6.1 Build 7601
2001:db8:0:f108::3
2001:db8:0:f109::3
The operation completed successfully.
4. Follow Step 3 on page 94 through Step 10 on page 96 in
“Configuring via Target Portal Discovery.”
Configuring Jumbo frames
To configure Jumbo frames, set the MTU on the host, switch (host and
storage side) and storage port to 9000.
The switch port MTU can be set using the switch admin tool.
Contact your EMC Customer Service Engineer to set the storage port
MTU.
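The point of an end-to-end MTU of 9000 is overhead amortization: every Ethernet frame carries fixed IP and TCP headers, so larger frames leave more room for iSCSI data and require far fewer frames per transfer. A back-of-the-envelope sketch, assuming minimal 20-byte IP and 20-byte TCP headers (no options):

```python
# Approximate TCP payload available per frame for a given MTU,
# assuming minimal 20-byte IP and 20-byte TCP headers (no options).
IP_HDR = 20
TCP_HDR = 20

def tcp_payload(mtu):
    return mtu - IP_HDR - TCP_HDR

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_payload(mtu)} bytes of TCP payload per frame")
```

With jumbo frames, roughly one-sixth as many frames (and per-frame interrupts) are needed to move the same amount of data.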
Setting MTU on a Windows host
The MTU can be changed by editing the HBA driver properties.
Consult your driver documentation for more information.
The netsh command line scripting utility can also be used to set the
MTU. The usage of the netsh utility described next applies to
Windows 2008 Server and may not be applicable for other versions of
Windows.
To set MTU on Windows, complete the following steps:
1. Show the MTU.
C:\>netsh interface ipv6 show subinterface
MTU   MediaSenseState   Bytes In    Bytes Out   Interface
----- ----------------- ----------- ----------- -------------
1500  1                 110592960   22062103    CORP
1500  1                 2073668     894650      1G iSCSI 1
1500  1                 796432      3343627     1G iSCSI 2
2. Change the MTU of "1G iSCSI 1" interface to 9000.
C:\>netsh interface ipv6 set subinterface "1G iSCSI 1" mtu=9000 store=persistent
Ok.
3. Show the updated MTU.
C:\>netsh interface ipv6 show subinterface
MTU   MediaSenseState   Bytes In    Bytes Out   Interface
----- ----------------- ----------- ----------- -------------
1500  1                 110592960   22062103    CORP
9000  1                 2073668     894650      1G iSCSI 1
1500  1                 796432      3343627     1G iSCSI 2
Connecting an iSCSI Linux host to a VMAX array
Figure 36 shows a Linux host connected to a VMAX array. This
scenario will be used in this use case study. This section includes the
following information:
◆ "Configuring storage port flags and an IP address on a VMAX array" on page 100
◆ "Configuring LUN Masking on a VMAX array" on page 107
◆ "Configuring an IP address on a Linux host" on page 110
◆ "Configuring CHAP on the Linux host" on page 113
◆ "Configuring iSCSI on a Linux host using Linux iSCSI Initiator CLI" on page 113
◆ "Configuring Jumbo frames" on page 115
◆ "Setting MTU on a Linux host" on page 115
Figure 36
Linux host connected to a VMAX array with 10 G connectivity (Linux server running PowerPath, eth0: 10.20.5.210 and eth1: 10.20.20.210, connected through a switch to VMAX ports SE 7G:0 at 10.20.5.201 and SE 7H:0 at 10.20.20.100)
This setup consists of a Linux host connected to a VMAX array as
follows:
1. The Linux host is connected via two paths with 10 G iSCSI and
IPv4. CHAP Authentication is used.
2. The VMAX array is connected via two paths for 1 G and 10 G
iSCSI each.
3. PowerPath is installed on the host.
Configuring storage port flags and an IP address on a VMAX array
The following methods discussed in this section can be used to
configure storage port flags and an IP address on a VMAX array:
◆ "Symmetrix Management Console" on page 100
◆ "CHAP" on page 104
◆ "Solutions Enabler" on page 106

Symmetrix Management Console

Note: For more details, refer to the EMC Symmetrix Management Console
online help, available on EMC Online Support at https://support.emc.com.
Follow instructions to download the help.

To configure storage port flags and an IP address on a VMAX
array using the Symmetrix Management Console, complete the
following steps:
1. Open the Symmetrix Management Console by using the IP
address of the array.
2. In the Properties tab, left-hand pane, select Symmetrix Arrays >
Directors > Gig-E, to navigate to the VMAX Gig-E storage port,
as shown in Figure 37.
3. Right-click the storage port you want to configure, check Online,
and select Port and Director Configuration > Set Port Attributes
from the drop-down menu, as shown in Figure 37.
Figure 37
Set port attributes
IMPORTANT
Take the port offline if the IP address is being changed. Select
Port and Director Configuration and uncheck Online.
The Set Port Attributes dialog box displays, as shown in
Figure 38.
Figure 38
Set Port Attributes dialog box
4. In the Set Port Attributes dialog box, select the following, as
shown in Figure 38:
• Common_Serial_Number
• SCSI_3
• SPC2_Protocol_Version
• SCSI_Support1
Note: Refer to the appropriate host connectivity guide, available on EMC
Online Support at https://support.emc.com, for your operating system
for the correct port attributes to set.
5. In the Set Port Attributes dialog box, enter the following, as
shown in Figure 38:
• For IPv4, enter the IPv4 Address, IPv4 Default Gateway, and
IPv4 Netmask.
• For IPv6, enter the IPv6 Addresses and IPv6 Net Prefix.
6. Click Add to Config Session List.
7. In the Symmetrix Manager Console window, select the Config
Session tab, as shown in Figure 39.
Figure 39
Config Session tab
8. In the My Active Tasks tab, click Commit All, as shown in
Figure 40.
Figure 40
My Active Tasks, Commit All

CHAP
To configure CHAP, complete the following steps.
1. From the Symmetrix Management Console, right-click on the
storage port you want to configure and select Port and Director
Configuration > CHAP Authentication for CHAP-related
information, as shown in Figure 41.
Figure 41
CHAP authentication
The following dialog box displays.
Figure 42
Director Port CHAP Authentication Enable/Disable dialog box
2. Click OK.
The following dialog box displays.
Figure 43
Director Port CHAP Authentication Set dialog box
3. A Credential and Secret must be configured for CHAP to be
operational.
Solutions Enabler
To configure storage port flags and an IP address on a VMAX
array using Solutions Enabler, refer to the following sections:
◆ "Setting storage port flags and IP address" on page 106
◆ "Setting flags per initiator group" on page 107
◆ "Viewing flags setting for initiator group" on page 107
Setting storage port flags and IP address
Issue the following command:
symconfigure -sid <SymmID> –file <command file> preview|commit
where command file contains:
set port DirectorNum:PortNum
[FlagName=enable|disable[, ...]] gige
primary_ip_address=IPAddress
primary_netmask=IPAddress
default_gateway=IPAddress
isns_ip_address=IPAddress
primary_ipv6_address=IPAddress
primary_ipv6_prefix=<0 -128>
[fa_loop_id=integer] [hostname=HostName];
For example:
Command file for enabling Common_Serial_Number, SCSI_3,
SPC2_Protocol_Version and SCSI_Support1 flags and setting the IPv4
address, netmask, and gateway on ports 7G:0 and 7H:0:
set port 7G:0
Common_Serial_Number=enable, SCSI_3=enable, SPC2_Protocol_Version=enable,
SCSI_Support1=enable gige
primary_ip_address= 10.20.5.201
primary_netmask = 255.255.255.0
default_gateway=10.20.5.1
set port 7H:0
Common_Serial_Number=enable, SCSI_3=enable, SPC2_Protocol_Version=enable,
SCSI_Support1=enable gige
primary_ip_address= 10.20.20.100
primary_netmask = 255.255.255.0
default_gateway=10.20.20.1
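When many ports must be configured, the command-file stanzas above can be generated from a small table. This sketch simply reproduces the file format shown in this example; the port names, flag names, and addresses are the ones used above:

```python
# Generate a symconfigure command file like the one above from a
# table of (port, ip, netmask, gateway) rows. The flag list and the
# stanza layout follow the example in this section.
FLAGS = ("Common_Serial_Number=enable, SCSI_3=enable, "
         "SPC2_Protocol_Version=enable, SCSI_Support1=enable")

def port_stanza(port, ip, netmask, gateway):
    return (f"set port {port}\n"
            f"{FLAGS} gige\n"
            f"primary_ip_address={ip}\n"
            f"primary_netmask={netmask}\n"
            f"default_gateway={gateway}\n")

ports = [("7G:0", "10.20.5.201", "255.255.255.0", "10.20.5.1"),
         ("7H:0", "10.20.20.100", "255.255.255.0", "10.20.20.1")]
cmd_file = "\n".join(port_stanza(*p) for p in ports)
```

The resulting text can be saved and passed to symconfigure with the -file option, as shown earlier.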
Setting flags per initiator group
Issue the following command:
symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on <flag>
<-enable |-disable> | off [flag]>
For example:
Enabling Common_Serial_Number, SCSI_3, SPC2_Protocol_Version
and SCSI_Support1 flags for initiator group Linux10G_IG:
symaccess -sid 316 -name Linux10G_IG -type initiator set ig_flags on
Common_Serial_Number,SCSI_3,SPC2_Protocol_Version,SCSI_Support1 –enable
Viewing flags setting for initiator group
Issue the following command:
symaccess -sid <SymmID> -type initiator show <GroupName> -detail
For example:
symaccess -sid 316 -type initiator show Linux10G_IG -detail
Configuring LUN Masking on a VMAX array
The following methods discussed in this section can be used to
configure LUN Masking on a VMAX array:
◆
“Using Symmetrix Management Console” on page 108
◆
“Using Solutions Enabler” on page 108
◆
“Using SYMCLI for VMAX” on page 110
Using Symmetrix Management Console

To create an initiator group, port group, storage group, and masking
view using the Symmetrix Management Console, refer to the EMC
Symmetrix Management Console online help, available on EMC
Online Support at https://support.emc.com. Follow instructions to
download the help, then refer to the Storage Provisioning section, as
shown in Figure 44.

Figure 44
EMC Symmetrix Management Console, Storage Provisioning

Using Solutions Enabler

To create an initiator group, port group, storage group, and masking
view using Solutions Enabler, refer to the following sections:
◆ "Creating an initiator group" on page 108
◆ "Creating a port group" on page 109
◆ "Creating a storage group" on page 109
◆ "Creating masking view" on page 109
Creating an initiator group
Issue the following command:
symaccess -sid <SymmID> -type initiator -name <GroupName> create
symaccess -sid <SymmID> -type initiator -name <GroupName> -iscsi <iqn> add
For example:
symaccess -sid 3003 -type initiator -name Linux10G_IG create
symaccess -sid 3003 -type initiator -name Linux10G_IG -iscsi
iqn.1994-05.com.redhat:1339be8c4613 add
Creating a port group
Issue the following command:
symaccess -sid <SymmID> -type port -name <GroupName> create
symaccess -sid <SymmID> -type port -name <GroupName> -dirport
<DirectorNum>:<PortNum> add
For example:
symaccess -sid 3003 -type port -name Linux10G_PG create
symaccess -sid 3003 -type port -name Linux10G_PG -dirport 7G:0 add
Creating a storage group
Issue the following command:
symaccess -sid <SymmID> -type storage -name <GroupName> create
symaccess -sid <SymmID> -type storage -name <GroupName> add devs
<SymDevStart>:<SymDevEnd>
For example:
symaccess -sid 3003 -type storage -name Linux10G_SG create
symaccess -sid 3003 -type storage -name Linux10G_SG add devs 816:842
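The devs 816:842 argument above is a contiguous range of hexadecimal Symmetrix device numbers. A quick sketch of what that range covers:

```python
# Expand the hexadecimal device range used in "add devs 816:842".
start, end = 0x816, 0x842
devs = [format(n, "03X") for n in range(start, end + 1)]
print(devs[0], devs[-1], len(devs))  # first device, last device, count
```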
Creating masking view
Issue the following command:
symaccess -sid <SymmID> create view -name <MaskingView> -ig <InitiatorGroup> -pg
<PortGroup> -sg <StorageGroup>
For example:
symaccess -sid 316 create view -name SGELI2-83_MV -ig SGELI2-83_IG -pg
SGELI2-83_PG -sg SGELI2-83_SG
Listing masking view
Issue the following command:
symaccess -sid <SymmID> list view -name <MaskingView>
For example:
symaccess -sid 3003 list view -name Linux10G
For more details, refer to the EMC Solutions Enabler Symmetrix Array
Controls CLI Product Guide, available on EMC Online Support at
https://support.emc.com.
Using SYMCLI for VMAX
1. To enable CHAP on an iSCSI initiator, use the following form:
symaccess -sid SymmID -iscsi iqn enable chap
For example:
# symaccess -sid 3003 -iscsi iqn.1994-05.com.redhat:1339be8c4613 enable CHAP
2. To enable CHAP on a specific director and port, use the following
form:
symaccess -sid SymmID [-dirport Dir:Port] enable chap
For example:
# symaccess -sid 3003 -dirport 7G:0 enable chap
3. To set the CHAP credential and secret on a director and port, use
the following form:
symaccess -sid SymmID -dirport Dir:Port set chap -cred Credential -secret Secret
For example:
# symaccess -sid 3003 -dirport 7G:0 set chap -cred chap -secret abcdefgh
4. To disable CHAP on a specific director and port, use the
following form:
symaccess -sid SymmID [-dirport Dir:Port] disable chap
5. To delete CHAP from a specific director and port, use the
following form:
symaccess -sid SymmID [-dirport Dir:Port] delete chap
Configuring an IP address on a Linux host
To configure an IP address on a Linux host, complete the following
steps:
1. Issue the ifconfig command to verify the present IP addresses, as
shown in Figure 45:
Figure 45
Verify IP addresses
2. Use the following command to shut down the interface:
# ifconfig eth0 down
# ifconfig eth1 down
3. Use the following command to set the IP address and bring the
port back up.
# ifconfig eth0 10.20.5.210 netmask 255.255.255.0 up
# ifconfig eth1 10.20.20.210 netmask 255.255.255.0 up
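Each interface configured above must be able to reach its storage port; in this example's flat topology, that means sharing an IP subnet with it. A quick sanity check using the addresses from Figure 36:

```python
import ipaddress

# Check that a host NIC and the storage port it serves share a subnet
# (addresses and netmask are the ones used in this example).
def same_subnet(host_ip, port_ip, netmask):
    net = ipaddress.ip_network(f"{host_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(port_ip) in net

print(same_subnet("10.20.5.210", "10.20.5.201", "255.255.255.0"))    # eth0 and SE 7G:0
print(same_subnet("10.20.20.210", "10.20.20.100", "255.255.255.0"))  # eth1 and SE 7H:0
```

In a routed configuration the subnets may differ, but then the gateway settings in the following step become essential.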
4. Check the parameters in the interface configuration files located in
/etc/sysconfig/network-scripts.
Note: This folder contains all ifcfg-eth files. Make changes to the file
appropriate to the interface being used.
For example, the following lists the properties on the interface
eth0. To enable the IP address to be present with each reboot, set
"ONBOOT=yes" .
[root@i2051210 network-scripts]# more ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.20.5.210
PREFIX=24
GATEWAY=10.20.20.1
DEFROUTE=no
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
GATEWAY=10.246.51.1
HWADDR=00:00:C9:C0:5E:90
5. Verify the IP address by issuing the following command:
[root@i2051210 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:00:C9:C0:5E:90
          inet addr:10.20.5.210  Bcast:10.20.5.255  Mask:255.255.255.0
          inet6 addr: fe80::200:c9ff:fec0:5e90/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4149 (4.0 KiB)  TX bytes:4604 (4.4 KiB)

eth1      Link encap:Ethernet  HWaddr 00:00:C9:C0:5E:92
          inet addr:10.20.20.210  Bcast:10.20.20.255  Mask:255.255.255.0
          inet6 addr: fe80::200:c9ff:fec0:5e92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:243 (243.0 b)  TX bytes:4596 (4.4 KiB)
6. Add the IPv6 address, if needed, by using the following
commands:
ifconfig eth0 inet6 add 2001:0db8:0:f101::1/64
ifconfig eth1 inet6 add 2001:0db8:0:f101::2/64
7. Ping the storage port to test connectivity, as shown in Figure 46.
Figure 46
Test connectivity
Configuring CHAP on the Linux host
To configure CHAP on the Linux host, complete the following steps:
1. Configure the Credential and Secret on the host
(/etc/iscsi/iscsid.conf).
node.session.auth.authmethod = CHAP
node.session.auth.username = chap
node.session.auth.password = abcdefgh
2. Restart the iSCSI service.
service open-iscsi restart
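For reference, the exchange these settings enable is the challenge-response scheme of RFC 1994, which iSCSI reuses: the target sends an identifier and a random challenge, and the initiator answers with an MD5 digest of the identifier, the shared secret, and the challenge, so the secret itself never crosses the wire. A sketch of the response computation; the secret matches this example's configuration, while the identifier and challenge values are made up:

```python
import hashlib

# CHAP response per RFC 1994: MD5 over the one-byte identifier,
# the shared secret, and the challenge from the authenticator.
def chap_response(identifier, secret, challenge):
    return hashlib.md5(bytes([identifier]) + secret + challenge).hexdigest()

# Identifier and challenge below are illustrative only; the target
# chooses fresh values for each authentication attempt.
resp = chap_response(1, b"abcdefgh",
                     bytes.fromhex("00112233445566778899aabbccddeeff"))
```

The target performs the same computation with its copy of the secret and compares the digests.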
Configuring iSCSI on a Linux host using Linux iSCSI Initiator CLI
Complete the following steps to configure iSCSI on a Linux host
using Linux iSCSI Initiator CLI:
1. Issue the following commands to discover the target devices:
# iscsiadm -m discovery -t sendtargets -p 10.20.5.201
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98
# iscsiadm -m discovery -t sendtargets -p 10.20.20.100
10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
2. Issue the following command to print out the nodes that have
been discovered:
./iscsiadm -m node
# iscsiadm -m node
10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98
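When scripting logins against many targets, the node-list output above is easy to parse: each line has the form "<ip>:<port>,<tpgt> <target-iqn>". A small sketch, using the output text from this example:

```python
# Parse "iscsiadm -m node" output lines of the form
# "<ip>:<port>,<tpgt> <target-iqn>" into (portal, target) pairs.
def parse_nodes(output):
    pairs = []
    for line in output.strip().splitlines():
        portal_tpgt, target = line.split()
        portal = portal_tpgt.rsplit(",", 1)[0]  # drop the target portal group tag
        pairs.append((portal, target))
    return pairs

out = """10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98"""
nodes = parse_nodes(out)
```

Each (portal, target) pair supplies the -p and -T arguments for the login command in the next step.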
3. Log in by taking the IP, port, and target name from the above
example and running:
./iscsiadm -m node -T targetname -p ip:port -l
# iscsiadm --mode node --targetname iqn.1992-04.com.emc:50000972082eedd8 --portal
10.20.20.100 --login
Logging in to [iface: default, target: iqn.1992-04.com.emc:50000972082eedd8,
portal: 10.20.20.100,3260]
Login to [iface: default, target: iqn.1992-04.com.emc:50000972082eedd8, portal:
10.20.20.100,3260] successful.
# iscsiadm --mode node --targetname iqn.1992-04.com.emc:50000972082eed98 --portal
10.20.5.201 --login
Logging in to [iface: default, target: iqn.1992-04.com.emc:50000972082eed98,
portal: 10.20.5.201,3260]
Login to [iface: default, target: iqn.1992-04.com.emc:50000972082eed98, portal:
10.20.5.201,3260] successful.
4. Issue the following command to show all records in the discovery
database and the targets discovered from each record:
./iscsiadm -m discovery -P 1
# iscsiadm -m discovery -P 1
SENDTARGETS:
DiscoveryAddress: 10.20.20.100,3260
Target: iqn.1992-04.com.emc:50000972082eedd8
Portal: 10.20.20.100:3260,1
Iface Name: default
DiscoveryAddress: 10.20.5.200,3260
DiscoveryAddress: 10.20.5.201,3260
Target: iqn.1992-04.com.emc:50000972082eed98
Portal: 10.20.5.201:3260,1
Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
Configuring Jumbo frames
To configure Jumbo frames, set the MTU on the host, switch (host and
storage side) and storage port to 9000.
The switch port MTU can be set using the switch admin tool.
Contact your EMC Customer Service Engineer to set the storage port
MTU.
Setting MTU on a Linux host
The MTU can be changed by editing the HBA driver properties.
Consult your driver documentation for more information.
The ip and ifconfig command line utilities can also be used to set the
MTU. The usage described next applies to a typical Linux server and
may not be applicable for all Linux distributions.
To set MTU on Linux, complete the following steps:
1. To show the MTU, issue the following command.
Note: By default, the MTU size is set to 1500 bytes.
ip link list
2. To change the MTU, issue the following command for the 10 G
iSCSI initiator Ethernet interface on the Linux host.
ifconfig eth0 mtu 9000
3. To make the changes to the MTU persistent upon reboot, change
the "ifcfg-eth*" file associated with the interface.
4. To show the updated MTU, issue the following command.
ip link list
Configuring the VNX for block 1 Gb/10 Gb iSCSI port
This section contains the following information:
◆ "Prerequisites" on page 117
◆ "Configuring storage system iSCSI front-end ports" on page 118
◆ "Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008" on page 123
◆ "Configuring iSCSI initiators for a configuration without iSNS" on page 126
◆ "Registering the server with the storage system" on page 142
◆ "Setting storage system failover values for the server initiators with Unisphere" on page 144
◆ "Configuring the storage group" on page 159
◆ "iSCSI CHAP authentication" on page 172
Figure 47 will be used in the examples presented in this section.
Figure 47
Windows host connected to a VNX array with 1 G/10 G connectivity (Windows Server running PowerPath, 10.1.1.198 and 192.168.1.198, connected through a switch to VNX ports at 10.1.1.98 and 192.168.1.98, IPv4)
Prerequisites
Before you begin, you must complete the cabling of the iSCSI
front-end data ports to the server ports.
Note: The 10 GbE iSCSI modules require EMC FLARE® Operating
Environment (OE) version 04.29.000.5.0xx or later.
IMPORTANT
1 GbE iSCSI ports require Ethernet LAN cables and 10 GbE iSCSI
ports require fibre optical cables for Ethernet transmission.
For 1 Gb transmission, you need CAT 5 Ethernet LAN cables for
10/100 transmission or CAT 6 cables. These cables can be up to 100
meters long.
For 10 Gb Ethernet transmission, you need fibre optical cables for a
fibre optic infrastructure or active twinaxial cables for an active
twinaxial infrastructure. EMC strongly recommends you use OM3
50 µm cables for all optical connections.
An active twinaxial infrastructure is supported for switch
configurations only.
For cable specifications, refer to the Technical Specifications for your
storage system. You can generate an up-to-date version of these
specifications using the Learn about storage system link on the
storage system support website.
For high availability:
◆ Connect one or more iSCSI front-end data ports on SP A to ports
on the switch or router and connect the same number of iSCSI
front-end data ports on SP B to ports on the same switch or router
or on another switch or router, if two switches or routers are
available.
◆ For a multiple NIC or iSCSI HBA server, connect one or more NIC
or iSCSI HBA ports to ports on the switch or router and connect
the same number of NIC or iSCSI HBA ports to ports on the same
switch or router or on another switch or router, if two switches or
routers are available.
Configuring storage system iSCSI front-end ports
To configure storage system iSCSI front-end ports, complete the
following steps:
1. Start Unisphere by entering, in an Internet browser, the IP address
of an SP of the storage system that you are trying to manage.
2. Enter your user name and password.
3. Click Login.
4. From Unisphere, select System > Hardware > Storage Hardware.
Figure 48
Unisphere, System tab
5. Identify the storage system iSCSI front-end ports by clicking SPs >
SP A/B > IO Modules > Slot > Port <#> in the Hardware
window.
The example used here is SPs > SP A > IO Modules > Slot A4 >
Port 0.
The Properties message box will display.
Figure 49
Message box
6. Click OK.
7. Highlight the iSCSI front-end port that you want to configure and
click Properties.
The iSCSI Port Properties window displays.
Figure 50
iSCSI Port Properties window
8. Click Add in Virtual Port Properties to assign an IP address to the
port. The iSCSI Virtual Port Properties window displays.
Figure 51
iSCSI Virtual Port Properties window
9. Click OK and then close all open dialog boxes.
A Warning message displays asking if you wish to continue.
Figure 52
Warning message
10. Click OK.
A message showing successful completion displays.
Figure 53
Successful message
11. Click OK.
The iSCSI Port Properties window displays the added virtual
ports in the Virtual Port Properties area.
Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008
To assign an IP address to each NIC or iSCSI HBA in a Windows
Server 2008 that will be connected to the storage system, complete the
following steps.
1. Click Start > Control Panel > Network and Sharing Center >
Manage Network Connections.
The Network Connections window displays.
Figure 54
Control Panel, Network Connections window
2. Locate 10 GbE interfaces in the Network Connections dialog box.
3. Identify the NIC or iSCSI HBA for which you want to set the IP
address in the dialog box (QLogic 10 Gb PCI Ethernet Adapter in
this example) and right-click it.
The Local Area Connection Properties dialog box displays.
Figure 55
Local Area Connection Properties dialog box
4. Select the Internet Protocol Version 4 (TCP/IPv4) entry in the list
and then click Properties.
The Internet Protocol Version 4 (TCP/IPv4) Properties dialog
box displays.
Figure 56
Internet Protocol Version 4 (TCP/IPv4) Properties dialog box
5. In the General tab, select Use the following IP address and enter
the appropriate IP address and subnet mask of the adapter in the
IP address and Subnet mask fields.
6. Click OK and then close all open dialog boxes.
7. Repeat these steps for any other iSCSI adapters in the host.
Configuring iSCSI initiators for a configuration without iSNS
Before an iSCSI initiator can send data to or receive data from the
storage system, you must configure the network parameters for the
NIC or HBA iSCSI initiators to connect with the storage-system SP
iSCSI targets.
You may need to install the Microsoft iSCSI Initiator software. This
can be downloaded from http://www.microsoft.com.
Note: Some operating systems, such as Microsoft Windows 2008 (used in this
example) have bundled the iSCSI initiator with the OS. As a result, it will not
need to be installed and can be accessed directly from Start > Administrative
Tools > iSCSI Initiator.
There are two ways to configure iSCSI initiators on a Windows server
to connect to the storage-system iSCSI targets:
◆ Using Unisphere Server Utility
You can register the server's NICs or iSCSI HBAs with the
storage system. Refer to "Using Unisphere Server Utility" on
page 127.
◆ Using Microsoft iSCSI initiator
If you are an advanced user, you can configure iSCSI initiators to
connect to the targets. Refer to "Using Microsoft iSCSI initiator"
on page 134.
Using Unisphere Server Utility
To configure iSCSI initiators on a Windows server to connect to the
storage-system iSCSI targets using the Unisphere Server Utility,
complete the following steps:
1. On the server, open the Unisphere Server Utility. The EMC
Unisphere Server Utility window displays.
Figure 57
EMC Unisphere Server Utility welcome window
2. Select Configure iSCSI Connections on this server and click
Next.
The next window displays.
Figure 58
EMC Unisphere Server Utility window, Configure iSCSI Connections
3. Select Configure iSCSI Connections and click Next.
The iSCSI Targets and Connections window displays.
Figure 59
iSCSI Targets and Connections window
4. Select one of the following options to discover the iSCSI target
ports on the connected storage systems:
• Discover iSCSI targets on this subnet
Scans the current subnet for all connected iSCSI
storage-system targets. The utility scans the subnet in the
range from 1 to 255. For example, if the current subnet is
10.1.1, the utility will scan the IP addresses from 10.1.1.1 to
10.1.1.255.
Figure 60
Discover iSCSI targets on this subnet
• Discover iSCSI targets for this target portal
Discovers targets known to the specified iSCSI SP data port.
Figure 61
Discover iSCSI targets for this target portal
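The subnet sweep described above (probing .1 through .255 on the current /24 subnet) can be sketched in a few lines. The following Python snippet is an illustrative approximation of the address range the utility probes, not EMC's implementation:

```python
import ipaddress

def sweep_addresses(subnet_prefix):
    """Enumerate the host addresses a /24 sweep would probe.

    subnet_prefix is the first three octets, e.g. "10.1.1"; the
    utility probes .1 through .255 on that subnet.
    """
    base = int(ipaddress.ip_address(f"{subnet_prefix}.0"))
    return [str(ipaddress.ip_address(base + i)) for i in range(1, 256)]

addresses = sweep_addresses("10.1.1")
print(addresses[0], addresses[-1], len(addresses))  # 10.1.1.1 10.1.1.255 255
```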
5. Click Next.
The iSCSI Targets window displays.
Figure 62
iSCSI Targets window
6. For each target you want to log in to, complete the following:
a. In the iSCSI Targets window, select the IP address of the
inactive target.
b. Under Login Options, select Also login to peer iSCSI target
for High Availability (recommended) if the peer iSCSI target
is listed.
c. Select a Server Network Adapter IP address from the
drop-down list if you have the appropriate failover software,
such as EMC PowerPath.
Note: The IP address used should be the IP address of the adapter
that is on the same network as the target. In this case, you would
select the IP address 10.1.1.98 to access the target at IP address
10.1.1.198.
d. If you selected Also login to peer iSCSI target for High
Availability (recommended), leave the Server Network
Adapter IP set to Default to allow the iSCSI initiator to
automatically fail over to an available NIC in the event of a
failure.
This option allows the utility to create a login connection to
the peer target so that if the target you selected becomes
unavailable, data will continue to flow to the peer target.
e. Click Logon to connect to the selected target.
A message displays showing the logon as successful.
Figure 63
Successful logon message
f. Click OK. The iSCSI Targets window (Figure 62 on page 133)
displays again.
g. Click Next.
The Server Utility window displays.
Figure 64
Server registration window
7. In the server registration window, click Next to send the updated
information to the storage system.
A message showing a successful update displays.
Note: If you have the host agent installed on the server, you will get an
error message indicating that the host agent is running and you cannot
use the server utility to update information to the storage system; the
host agent will do this automatically.
Figure 65
Successfully updated message
8. Click Finish.
9. Repeat steps 2-8 for any additional iSCSI Targets.
Using Microsoft iSCSI initiator
To configure iSCSI initiators on a Windows server to connect to the
storage-system iSCSI targets using the Microsoft iSCSI Initiator software,
complete the following steps:
1. Open the Microsoft iSCSI Initiator properties dialog by clicking
Start > Administrative Tools > iSCSI Initiator.
The Microsoft iSCSI Initiator Properties dialog box displays.
Figure 66
Microsoft iSCSI Initiator Properties dialog box
2. Add an iSCSI Target by clicking the Discovery tab and then Add
Portal.
Figure 67
Discovery tab
The Add Target Portal dialog box displays.
Figure 68
Add Target Portal dialog box
3. Click Advanced.
The Advanced Settings dialog box displays.
Figure 69
Advanced Settings dialog box, General tab
a. In the Local Adapter field, choose Microsoft iSCSI Initiator
from the pull-down list.
b. In the Source IP field, choose the IP Address of the adapter
that will be used to access this target.
c. In the Target portal field, choose the IP address of the target
that will be accessed by this source.
Note: The IP address used should be the IP address of the adapter
that is on the same network as the target. In this case, you would
select the IP address 10.1.1.98 to access the target at IP address
10.1.1.198.
d. Click OK. You are returned to the iSCSI Initiator Properties,
Discovery tab.
Figure 70
iSCSI Initiator Properties dialog box, Discovery tab
4. Repeat steps 2-3 for any additional iSCSI Targets.
5. In the iSCSI Initiator Properties dialog box, click the Targets tab
and the iSCSI Targets should be displayed as Inactive, as shown
in Figure 71.
Figure 71
iSCSI Initiator Properties dialog box, Targets tab
6. Select the target in the list and click Logon….
The Log On to Target dialog box displays.
Figure 72
Log on to Target dialog box
7. Ensure that the Automatically restore this connection when the
computer starts checkbox is selected. Also check the Enable
multi-path box if PowerPath multi-path software is already
installed on the host.
8. Click OK.
The iSCSI Initiator Properties dialog box, Targets tab displays
again. The target should be shown as Connected.
Other iSCSI targets display as Inactive.
Figure 73
Target, Connected
9. Click OK.
10. Repeat steps 5-9 to configure additional iSCSI Targets.
Registering the server with the storage system
To register the server using the Unisphere Server Utility on a
Windows server, complete the following steps:
1. On the server, run the Unisphere Server Utility by selecting
Start > Programs > EMC > Unisphere > Unisphere Server
Utility or Start > All Programs > EMC > Unisphere > Unisphere
Server Utility or click the Unisphere Server Utility shortcut icon.
The EMC Unisphere Server Utility, welcome window displays.
Figure 74
EMC Unisphere Server Utility, welcome window
2. In the Unisphere Server Utility dialog box, select Configure
iSCSI Connections on this server and click Next.
The utility automatically scans for all connected storage systems
and lists them under Connected Storage Systems, as shown in
Figure 75.
Figure 75
Connected Storage Systems
3. Locate the WWN of the NIC or iSCSI HBA you just installed. The
NIC or iSCSI HBA should appear once for every SP port to which
it is connected.
If the Unisphere Server Utility does not list your storage
processors, verify that your server is properly connected and
zoned to the storage system ports.
4. Click Next to register the server with the storage system.
The utility sends the server's name and the IP address of each
NIC or iSCSI HBA to each storage system. Once the server has
storage on the storage system, the utility also sends the device
name and volume or file system information for each LUN
(virtual disk) in the storage system that the server sees.
A message displays if the update is successful.
Figure 76
Successfully updated message
5. Click Finish to exit the utility.
Setting storage system failover values for the server initiators with Unisphere
There are two ways to set failover values for the server initiators with
Unisphere:
◆
Using Failover Setup Wizard
You can configure failover mode for the host initiators. Refer to
“Using Failover Setup Wizard” on page 144.
◆
Using Connectivity Status in Host Management
If you are an advanced user, you can configure failover mode for
the host initiators via the Connectivity Status window. Refer to “Using
Connectivity Status in Host Management” on page 153.
Using Failover Setup Wizard
To use the Unisphere Failover Setup wizard to set the storage system
failover values for all NIC or iSCSI HBA initiators belonging to the
server, complete the following steps:
1. From Unisphere, select All Systems > System List.
2. From the Systems page, select the storage system whose failover
values you want to set.
3. Select the Hosts tab. The following window displays.
Figure 77
EMC Unisphere, Hosts tab
4. Under Wizards, select the Failover Wizard.
The Start Wizard dialog box displays.
Figure 78
Start Wizard dialog box
5. In the Start Wizard dialog box, read the introduction, and then
click Next.
The Select Host dialog box displays.
Figure 79
Select Host dialog box
6. In the Select Host dialog box, select the server you just connected
to the storage system and click Next.
The Select Storage System dialog box displays.
Figure 80
Select Storage System dialog box
7. Select the storage system and click Next.
The Specify Settings dialog box displays.
Figure 81
Specify Settings dialog box
8. Set the following values for the type of software running on the
server.
For a Windows server or Windows virtual machine with
PowerPath, set:
a. Initiator Type to CLARiiON Open
b. Array CommPath to Enabled
c. Failover Mode to:
– 4 if your PowerPath version supports ALUA.
– 1 if your PowerPath version does not support ALUA.
For information on which versions of PowerPath support
ALUA, refer to the PowerPath release notes on the EMC
Online Support website (https://support.emc.com) or to
EMC Knowledgebase solution emc99467.
IMPORTANT
If you enter incorrect values, the storage system could become
unmanageable and unreachable by the server, and the server's
failover software could stop operating correctly.
If you configured your storage system iSCSI connections to
your Windows virtual machine with NICs, set the storage
system failover values for the virtual machine. If you
configured your storage system iSCSI connections to your
Hyper-V or ESX server, set the storage system failover values
for the Hyper-V or ESX server.
If you have a non-Windows virtual machine or a Windows
virtual machine with iSCSI HBAs, set the storage-system
failover values for the Hyper-V or ESX server.
d. Click Next.
A Review and Commit Settings window displays.
Figure 82
Review and Commit Settings
9. Review the configuration and all settings.
• If the settings are incorrect, click Back until you return to the
dialog box in which you need to re-enter the correct values.
• If the settings are correct, click Next.
If you clicked Next, the wizard displays a confirmation dialog
box.
Figure 83
Failover Setup Wizard Confirmation dialog box
10. Click Yes to continue.
The wizard displays a summary of the values you set for the
storage system.
Figure 84
Details from Operation dialog box
11. If the operation failed, return to the wizard. If the operation is
successful, click Finish and close the wizard.
12. Reboot the server for the initiator records to take effect.
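The Failover Mode rule from step 8 reduces to a single conditional. The following Python sketch is illustrative only (mode 4 is ALUA; mode 1 is the legacy non-ALUA mode) and is not part of any EMC tool:

```python
def failover_mode(powerpath_supports_alua):
    """Failover Mode to set for a Windows host running PowerPath:
    4 when the PowerPath version supports ALUA, otherwise 1."""
    return 4 if powerpath_supports_alua else 1

print(failover_mode(True), failover_mode(False))  # 4 1
```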
Using Connectivity Status in Host Management
To use the Connectivity Status to set the storage system failover
values for all NIC or iSCSI HBA initiators belonging to the server,
complete the following steps:
1. From Unisphere, select All Systems > System List.
2. From the Systems page, select the storage system for whose
failover values you want to set.
3. Select the Hosts tab. The following window displays.
Figure 85
EMC Unisphere, Hosts tab
4. Under Host Management, select Connectivity Status.
The Connectivity Status window displays.
Figure 86
Connectivity Status window, Host Initiators tab
5. In the Host Initiators tab, select the host name and expand it. The
expanded hosts display.
Figure 87
Expanded hosts
6. Click Edit. The Edit Initiator window displays.
Figure 88
Edit Initiators window
7. Check the boxes of the initiators that you want to edit.
8. Set the following values for the type of software running on the
server.
For a Windows server or Windows virtual machine with
PowerPath, set:
a. Initiator Type to CLARiiON Open
b. Array CommPath to Enabled
c. Failover Mode to:
– 4 if your PowerPath version supports ALUA.
– 1 if your PowerPath version does not support ALUA.
For information on which versions of PowerPath support
ALUA, refer to the PowerPath release notes on the EMC
Online Support at https://support.emc.com or to EMC
Knowledgebase solution emc99467.
IMPORTANT
If you enter incorrect values, the storage system could become
unmanageable and unreachable by the server, and the server's
failover software could stop operating correctly.
If you configured your storage system iSCSI connections to
your Windows virtual machine with NICs, set the storage
system failover values for the virtual machine. If you
configured your storage system iSCSI connections to your
Hyper-V or ESX server, set the storage system failover values
for the Hyper-V or ESX server.
If you have a non-Windows virtual machine or a Windows
virtual machine with iSCSI HBAs, set the storage-system
failover values for the Hyper-V or ESX server.
d. Click OK. A confirmation dialog box displays.
Figure 89
Confirmation dialog box
9. If the operation is successful, click Yes and close all windows. A
Success message displays.
Figure 90
Success confirmation message
10. Click OK.
11. You can confirm the change by selecting the initiator and then
clicking Detail in the Host Initiator tab of the Connectivity
Status window.
Figure 91
Connectivity Status window, Host Initiators tab
Initiator details display in the Initiator Information window.
Figure 92
Initiator Information window
Configuring the storage group
Before you begin, you must have created the LUNs according to
your storage provisioning plan. For detailed information on LUN
provisioning, refer to the VNX/CLARiiON documentation available
on EMC Online Support at https://support.emc.com.
1. Start Unisphere by entering the IP address of one SP of the
storage system that you want to manage in an Internet browser.
2. Enter your user name and password.
3. Click Login.
4. From Unisphere, select your system, as shown in Figure 93.
Figure 93
Select system
5. The following window displays, as shown in Figure 94.
Figure 94
Select Storage Groups
6. Select Hosts > Storage Groups in the top menu. The Storage
Groups window displays, as shown in Figure 95.
Figure 95
Storage Groups window
7. If you have created storage groups, skip to Step 8. If not, complete
the following steps:
a. From the task list, select Storage Groups > Create.
The Create Storage dialog box displays, as shown in
Figure 96.
Figure 96
Create Storage dialog box
b. Enter a name for the Storage Group. In this example the name
10Gb_iSCSI_i2051098_Win is used.
c. Choose one of the following options:
– Click OK to create the new storage group and close the
dialog box; or
– Click Apply to create the new storage group without
closing the dialog box. This allows you to create additional
storage groups.
A message displays showing that the storage group was created
successfully, as shown in Figure 97.
Figure 97
Confirmation dialog box
d. Choose one of the following options:
– If you want to add LUNs or connect hosts now, click Yes.
– If you want to add LUNs later, click No and follow the
next steps.
8. From the system page, select your system, then Hosts > Storage
Groups.
9. To connect the servers/hosts, select the storage group you just
created and choose one of the following options:
– Click Connect Hosts; or
– Open Properties by clicking Properties or right-clicking
and selecting Properties of the selected storage group, as
shown in Figure 98.
Figure 98
Storage Group, Properties
10. Click the Hosts tab in the properties of the storage group to
which you want to connect the servers, as shown in Figure 99.
Figure 99
Hosts tab
11. In the Hosts tab, select the available hosts you want to connect.
12. Click the arrow to move the host from the Available Hosts
column to the Host to be Connected column and click Apply.
The host displays in the Host to be Connected column, as shown
in Figure 100.
Figure 100
Hosts to be Connected column
13. Click OK. The main Unisphere window displays.
14. From the main Unisphere window, connect the LUNs to the
storage group, as shown in Figure 101.
Figure 101
Connect LUNs
From the task list under Storage Groups, select a storage group to
which you want to add LUNs and choose one of the following
options:
– Select Connect LUNs; or
– Click the LUNs tab from the Properties of the storage
group to which you want to add LUNs.
The LUNs tab displays, as shown in Figure 102.
Figure 102
LUNs tab
15. In the Available LUNs box, select the LUNs that you want to add
and click Add, as shown in Figure 102.
The LUNs will appear in the Selected LUNs box, as shown in
Figure 103.
Figure 103
Selected LUNs
16. Click Apply as shown in Figure 103. A confirmation box displays
as shown in Figure 104.
Figure 104
Confirmation dialog box
17. Click Yes.
A message displays showing that the operation was successful, as
shown in Figure 105.
Figure 105
Success message box
18. Click OK. The LUNs are now displayed, as shown in Figure 106.
Figure 106
Added LUNs
Making LUNs visible to a Windows server or Windows virtual machine with NICs
To allow the Windows server access to the LUNs that you created, use
Windows Computer Management to perform a rescan by completing
the following steps.
1. Choose one of the following options to open the Computer
Management window:
– Start > Computer Management
– Right-click My Computer > Manage
The Computer Management window displays, as shown in
Figure 107.
Figure 107
Computer Management window
2. Under the Storage tree, select Disk Management.
3. From the tool bar, select Action > Rescan Disks.
The rescanned disks display, as shown in Figure 108.
Figure 108
Rescanned disks
Verifying that PowerPath for Windows servers sees all paths to the LUNs
If you do not already have PowerPath installed, then install
PowerPath by referring to the appropriate PowerPath Installation
and Administration Guide for your operating system. This guide is
available on EMC Online Support at https://support.emc.com.
1. On the Windows server, open the PowerPath Management
Console by choosing one of the following options:
– Click the PowerPath monitor task bar icon; or
– Right-click the icon and select PowerPath Administrator
Figure 109
PowerPath icon
The EMC PowerPath Console screen displays, as shown in
Figure 110.
Figure 110
EMC PowerPath Console screen
2. Select Disks in the left pane. The following screen displays,
as shown in Figure 111.
Figure 111
Disks
3. Verify that the path metric for each LUN is n/n, where n is the
total number of paths to the LUN. Our example shows 2/2.
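A quick way to check metrics of this form programmatically. This is a hypothetical helper, assuming the "live/total" string shown in the console; it is not an EMC or PowerPath API:

```python
def fully_redundant(path_metric):
    """True when a 'live/total' path metric (e.g. '2/2') shows
    every configured path to the LUN alive."""
    live, total = (int(part) for part in path_metric.split("/"))
    return total > 0 and live == total

print(fully_redundant("2/2"), fully_redundant("1/2"))  # True False
```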
iSCSI CHAP authentication
The Windows server and the VNX for block support the Challenge
Handshake Authentication Protocol (CHAP) for iSCSI network
security.
CHAP provides a method for the Windows server and VNX for block
to authenticate each other through an exchange of a shared secret (a
security key that is similar to a password), which is typically a string
of 12-16 bytes.
IMPORTANT
If CHAP security is not configured for the VNX for block, any
computer connected to the same IP networks as the VNX for block
iSCSI ports can read from or write to the VNX for block.
CHAP has two variants, one-way and reverse CHAP authentication:
◆
In one-way CHAP authentication, CHAP sets up the accounts
that the Windows server uses to connect to the VNX for block.
The VNX for block authenticates the Windows server.
◆
In reverse CHAP authentication, the VNX for block authenticates
the Windows server and the Windows server also authenticates
the VNX for block.
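The challenge/response exchange underlying both variants can be sketched with the classic CHAP computation from RFC 1994, which iSCSI CHAP builds on: the responder proves knowledge of the shared secret without ever sending it. This is an illustrative sketch, not EMC's implementation:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"0123456789abcdef"   # 12-16 byte shared secret
challenge = os.urandom(16)     # random challenge sent by the authenticator
ident = 1

response = chap_response(ident, secret, challenge)
# The authenticator recomputes with its own copy of the secret and compares.
authenticated = chap_response(ident, secret, challenge) == response
print(authenticated)
```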
The CX-Series iSCSI Security Setup Guide provides detailed
information regarding CHAP. This can be found on the EMC Online
Support website.
Connecting an iSCSI Windows host to an XtremIO array
This section describes how to connect an iSCSI Windows host to an
XtremIO array.
This section includes the following information:
◆
“Prerequisites” on page 173
◆
“Configuring storage system iSCSI portal” on page 174
◆
“Assigning an IP address to each NIC or iSCSI HBA in a
Windows Server 2008” on page 176
◆
“Configuring iSCSI initiator on a Windows host” on page 178
◆
“Configuring LUN masking on an XtremIO array” on page 184
◆
“Detecting the iSCSI LUNs from Windows host” on page 189
Figure 112 shows a Windows host connected to an XtremIO array.
This scenario will be used in this use case study.
Figure 112
Windows host connected to an XtremIO array
Prerequisites
Before you begin, you must complete the cabling of the XtremIO
iSCSI ports to the server ports. There are two iSCSI targets per node.
The system is supplied with 10 Gb/s fiber-optic iSCSI ports to match
the customer infrastructure. XtremIO iSCSI port locations are shown
in Figure 113 on page 174.
Figure 113
XtremIO iSCSI port locations
This setup consists of a Windows host connected to an XtremIO array
as follows:
1. The Windows host is connected via two paths with 10G iSCSI and
IPv4.
2. The XtremIO array is connected via two paths for 10G iSCSI each.
3. PowerPath is installed on the host.
Note: To make PowerPath support XtremIO, install PowerPath using the
EMCPower.X64.signed.5.7.b223.exe /v"ADDLOCAL=XIO" command.
Configuring storage system iSCSI portal
To configure XtremIO storage iSCSI portal using GUI, complete the
following steps:
1. On the Main Menu, click Administration.
2. On the task bar, select iSCSI Network Configuration.
Figure 114
iSCSI Network Configuration window
3. Next to the iSCSI Portals Table, click Add to add an iSCSI portal.
The Edit X1-N1-iscsi1 iSCSI Portal dialog box displays.
Figure 115
Edit X1-N1-iscsi1 iSCSI Portal dialog box
Complete the following fields:
a. Target Port: Select a port from the drop-down menu.
b. IP Address/Subnet bits: Enter the portal's IP address and
subnet bits.
c. Click OK.
4. (Optional) If there is any router between the hosts and storage,
you can add an iSCSI routes table. To add this table, click Add
next to the iSCSI Routes Table.
The Add iSCSI Route dialog box displays. Fill out the following
fields:
a. Route Name: Define a name for the route.
b. Destination Subnet/Subnet bits: Enter the destination subnet.
c. Gateway IP: Enter the gateway IP address.
d. Click OK.
5. Repeat Step 1 through Step 4 to add another iSCSI portal,
192.168.2.1.
Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008
To assign an IP address to each NIC or iSCSI HBA in a Windows
Server 2008 that will be connected to the storage system, complete the
following steps.
1. Click Start > Control Panel > Network and Sharing Center >
Manage Network Connections.
The Network Connections window displays.
Figure 116
Control Panel, Network Connections window
2. Locate 10 GbE interfaces in the Network Connections dialog box.
3. Identify the NIC or iSCSI HBA for which you want to set the IP
address (QLogic 10 Gb PCI Ethernet Adapter in this example) and
right-click it.
The Local Area Connection Properties dialog box displays.
Figure 117
Local Area Connection Properties dialog box
4. Select the Internet Protocol Version 4 (TCP/IPv4) entry in the list
and then click Properties.
The Internet Protocol Version 4 (TCP/IPv4) Properties dialog
box displays.
Figure 118
Internet Protocol Version 4 (TCP/IPv4) Properties dialog box
5. In the General tab, select Use the following IP address and enter
the appropriate IP address and subnet mask of the adapter in the
IP address and Subnet mask fields.
6. Click OK and then close all open dialog boxes.
7. Repeat these steps for any other iSCSI adapters in the host.
Configuring iSCSI initiator on a Windows host
Before an iSCSI initiator can send data to or receive data from the
storage system, you must configure the network parameters for the
NIC or HBA iSCSI initiators to connect with the storage-system iSCSI
targets.
You may need to install the Microsoft iSCSI Initiator software. This
can be downloaded from http://www.microsoft.com.
Note: Some operating systems, such as Microsoft Windows 2008 (used in this
example) have bundled the iSCSI initiator with the OS. As a result, it will not
need to be installed and can be accessed directly from Start > Administrative
Tools > iSCSI Initiator.
To configure iSCSI on Windows via Target Port Discovery, complete
the following steps:
1. Launch the Microsoft iSCSI Initiator GUI. The iSCSI Initiator
Properties window displays.
Figure 119
iSCSI Initiator Properties window
2. Select the Discovery tab and click Discover Portal.
Figure 120
Discovery tab
The Discover Target Portal dialog box displays.
Figure 121
Discover Target Portal dialog box
The targets under the discovered portal now display.
Figure 122
Targets display
3. Select the Targets tab.
Figure 123
Targets tab
4. Select one Target and click Connect. Repeat for each Target.
The Connect to Target dialog box displays.
Figure 124
Connect to Target dialog box
5. Select the Add this connection to the list of Favorite Targets and
Enable multi-path checkboxes.
6. Click OK. The host is connected to the targets, as shown in the
following figure:
Figure 125
Host connected to targets
7. Repeat Step 2 through Step 6 to add the second iSCSI target.
Figure 126
Second iSCSI target
Configuring LUN masking on an XtremIO array
1. On the Main Menu, click Configuration.
Figure 127
Main menu
2. Next to the Volumes, click Add.
The Add New Volumes screen displays.
Figure 128
Add New Volumes screen
3. In the Add New Volumes screen, define the following:
a. Name: The name of the volume.
b. Size: The amount of disk space available for this volume.
c. Volume Type: Select one of the following types that define the
LB size and alignment-offset:
– Normal (512 LBs) (alignment-offset: 0)
– 4kB LBs (alignment-offset: 0)
– Legacy Windows (alignment-offset: 7)
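The three presets differ only in logical-block size and alignment offset. The sketch below records them as a lookup; the offset units are an assumption (512-byte sectors), under which the Legacy Windows preset compensates the classic 63-sector partition start, since 63 × 512 mod 4096 = 3584 = 7 × 512:

```python
# Presets from the list above: (logical block size in bytes,
# alignment offset in 512-byte sectors) -- units are an assumption.
VOLUME_TYPES = {
    "normal":         (512,  0),   # Normal (512 B LBs)
    "4kb":            (4096, 0),   # 4 kB LBs
    "legacy_windows": (512,  7),   # Legacy Windows
}

def alignment_offset_bytes(volume_type):
    _lb_size, offset_sectors = VOLUME_TYPES[volume_type]
    return offset_sectors * 512

print(alignment_offset_bytes("legacy_windows"))  # 3584
```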
4. (Optional) To put these LUNs into a folder, click Next.
A New Folder dialog box displays.
Figure 129
New folder dialog box
d. Enter the Folder Name: Windows in this example.
e. Click OK.
The folder shows in the Add New Volumes screen.
Figure 130
Add New Volumes screen
5. In the Add New Volumes screen, click Finish.
The volumes are created and appear in the volumes list in the
Configuration window.
Figure 131
Configuration window
6. Next to the Initiator Groups tab, click Add.
An Add Initiator Group window displays.
Figure 132
Add Initiator Group window
7. In the Add Initiator Group window, define the following:
a. Initiator Group Name: Enter a name for the group.
b. Initiator Name (optional): Add initiators to the group. A name
identifies the initiator in the GUI or CLI lists. A name is not
mandatory.
c. Port Address: Add the initiator's port address. For an iSCSI
initiator, use the IQN format; for example,
iqn.1991-05.com.microsoft:i2051085
Note: If you have logged in to iSCSI from the host iSCSI initiator,
XtremIO can detect the IQN. Click Add, select the designated IQN from
Initiator Port Address, then click OK.
The Add Initiator dialog box displays.
Figure 133
Add Initiator dialog box
8. Complete the information and click OK.
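The IQN format shown in step 7c (iqn.yyyy-mm.reversed-domain:identifier) can be given a loose sanity check before pasting it into the GUI. This pattern is illustrative only, not a full RFC 3720 validator:

```python
import re

# iqn.<year>-<month>.<reversed domain>[:<identifier>]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(name):
    return IQN_RE.match(name) is not None

print(looks_like_iqn("iqn.1991-05.com.microsoft:i2051085"))  # True
print(looks_like_iqn("10.1.1.98"))                           # False
```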
9. (Optional) To put the initiator group into a folder, click Next.
A new folder is added.
10. In the Add Initiator Group window, click Finish.
The created initiator group displays.
Figure 134
Initiator Groups displayed
11. Select the LUNs and initiator group that you want to map
together, then click Map All.
The LUN Mapping configuration displays.
Figure 135
LUN Mapping Configuration window
12. Click Apply.
This completes the LUN masking on XtremIO. For more details, refer
to the XtremIO User Guide, available on EMC Online Support at
https://support.emc.com.
Detecting the iSCSI LUNs from Windows host
Complete the following steps to make the LUNs available:
1. Launch the Microsoft iSCSI Initiator GUI.
The iSCSI Initiator Properties window displays.
Figure 136
iSCSI Initiator Properties window
2. Select the Volumes and Devices tab.
3. Click Auto Configure to bind the volumes.
4. Open PowerPath. The devices appear.
Figure 137
EMC PowerPath Console
5. Go to Windows Disk Management. The devices display.