Joint IEEE-SA and ITU Workshop on Ethernet
802.1AX-REV – Link Aggregation Revision
Panagiotis Saltsidis,
Senior Specialist, Ericsson
Geneva, Switzerland, 13 July 2013
Link Aggregation
Link Aggregation was originally standardized in 802.3ad-2000 (it was later incorporated into 802.3 as Clause 43 of 802.3-2005).
Since 2000 there has been growing demand that Link Aggregation not be specific to an individual MAC technology, and that the maintenance work be done in 802.1.
In 2008 Link Aggregation was removed from the 802.3-2008 revision and published as IEEE Std 802.1AX-2008.
A limitation of the current IEEE802.1AX is that all physical ports in the link aggregation group must reside on the same logical switch, which in most scenarios leaves a single point of failure: connectivity is lost when the physical switch to which both links are connected goes offline.
Proprietary solutions that address dual-homed scenarios exist.
Original 802.1AX Goals
Increased bandwidth
Linearly incremental bandwidth
Increased availability
Load sharing
Automatic configuration
Rapid configuration and reconfiguration
Deterministic behavior
Low risk of duplication or misordering
Support of existing MAC Clients
Backwards compatibility with aggregation-unaware devices
Accommodation of differing capabilities and constraints
No change to frame formats
Network management support
Dissimilar MACs
Link Aggregation Sublayer
[Figure: Block diagram of the Link Aggregation sublayer. An Aggregator Client sits on the Internal Sublayer Service Interface (Agg:M_UNITDATA) above Frame Collection (Frame Collector and Marker Responder) and Frame Distribution (Frame Distributor and optional Marker Generator/Receiver). Aggregator Parser/Multiplexers (AggMuxN:M_UNITDATA) carry Aggregator Client and Marker frames, while Link Aggregation Control, with its shared state variables and the Link Aggregation Control Protocol, exchanges control frames. Both feed Control Parser/Multiplexers (CtrlMuxN:M_UNITDATA and DataMuxN:M_UNITDATA) down to the individual MACs (MacN:M_UNITDATA).]
IEEE802.1AX-REV - DRNI
Link Aggregation—DRNI provides the benefits of Link Aggregation.
Portals—Connections between two cooperating sets of Systems (two
Portals) are supported, in contrast to Link Aggregation, so that
connectivity between two networks can be maintained despite the failure
of an entire System (and its connected links) belonging to a Portal.
Compatibility—A multi-System Portal can connect to a single-System
Portal or to an Aggregation System compliant with Clause 6 or with
previous versions of this Standard.
Administrative isolation—A DRNI Link Aggregation Group can connect
Portals in networks that are under separate administration, running
different fault recovery protocols.
Administrative independence—The specification of DRNI to
interconnect separate networks does not introduce new requirements on
either network's existing control protocols.
IEEE802.1AX-REV - DRNI
Inter-network fault isolation—The failure or recovery of a link or node
in one network, requiring a reaction by that network’s control protocols,
can be hidden by DRNI from the second network’s control protocols. Thus,
super-networks can be created out of separate networks interconnected
via DRNI, without propagating one network’s fault and recovery events
throughout the super-network.
Network-DRNI fault isolation—The failure or recovery of a link between
two Portals can be hidden by DRNI from both networks’ control protocols.
Rapid fault recovery—Means for the Systems in a Portal to communicate
are provided so that they can cooperate to respond rapidly to failure or
recovery events, typically on the order of milliseconds for link down events
and 1 second or less for link up events.
Extended faults—Optional elements of DRNI can support three Systems
in a Portal, so that fault redundancy can be provided even while a System
is added or removed from a Portal.
Distribution independence—The frame distribution algorithm used to
satisfy network requirements can be different from the algorithm used to
assign frames to the Aggregation Ports of a Link Aggregation Group.
DRNI
DRNI is created by using a Distributed Relay to
interconnect two or three Systems, each running
Link Aggregation, to create a Portal.
Each System in the Portal (i.e., each Portal
System) runs Link Aggregation with a single
Aggregator.
The Distributed Relay enables the Portal Systems
to jointly terminate a Link Aggregation Group.
To all other Systems to which the Portal is
connected, the Link Aggregation Group appears
to terminate in a separate emulated System
created by the Portal Systems.
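A minimal data-model sketch of these relationships, assuming nothing beyond what is stated above; the class and field names are illustrative, not taken from the standard:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Aggregator:
    """The single Aggregator each Portal System runs Link Aggregation with."""
    key: int                                            # administrative Aggregator Key
    port_ids: List[int] = field(default_factory=list)   # attached Aggregation Ports

@dataclass
class PortalSystem:
    system_id: str              # this Portal System's own identity
    portal_system_number: int   # 1..3 within the Portal
    aggregator: Aggregator      # exactly one Aggregator per Portal System

@dataclass
class Portal:
    portal_id: str               # doubles as the emulated System's identity
    systems: List[PortalSystem]  # two or three cooperating Portal Systems

    def emulated_system_id(self) -> str:
        # To every external partner System, the jointly terminated Link
        # Aggregation Group appears to end in one System with this identity.
        return self.portal_id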
Systems in a Portal
[Figure: Systems A and B. Each System runs a "Function 1" over a Link Aggregation sublayer; System A aggregates MACs p and q, System B aggregates MACs r, s, and t. A possible network link connects the two Systems, and each System's outer ports carry possible other network links.]
Systems A and B are each characterized by performing a "Function
1," which is presumably some kind of packet relay function, e.g.,
a router or a bridge.
“Function 1” could just as well be a file server operation, in which
case the outside two “ports” on each System would likely not be
present.
Each system runs a single instance of a Link Aggregation
sublayer.
Distributed Relay
[Figure: Portal Systems A and B, each running a "Function 1" connected by a Gateway to emulated System C. Emulated System C contains the Distributed Relay and a Link Aggregation sublayer terminating MACs p, q, r, s, and t. A possible network link joins the Portal Systems, and each has ports for possible other network links.]
There appears to exist a third emulated System C, connected to the original Portal
Systems by a link that has been inserted between Function 1 and Link Aggregation.
Portal Systems A and B conspire to behave, insofar as any other Systems to which they are connected
can discern, as if emulated System C actually exists.
The Distributed Relay supports:
The necessary protocols and procedures for up to three Portal Systems.
Link Aggregation functions, each subsuming one or more MACs.
Connections among the Portal Systems of the Distributed Relay.
The Distributed Relay in the emulated System C is an (N+1)-port relay for N Portal Systems, with N
Gateway Ports connected to the Portal Systems, and a single Emulated Link Aggregation sublayer
associated with the original Portal Systems.
The Aggregation Ports (MACs) have been moved to the emulated System, and thus appear, to all other
Systems, to be equally distant from the real Portal Systems comprising the Distributed Relay.
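The (N+1)-port structure can be sketched as follows; all names are assumptions for illustration, not the standard's:

from typing import List

class DistributedRelay:
    def __init__(self, n_portal_systems: int):
        # One Gateway Port per Portal System (the N ports) ...
        self.gateway_ports: List[str] = [
            f"Gateway-{i}" for i in range(1, n_portal_systems + 1)
        ]
        # ... plus the single emulated Link Aggregation sublayer (the +1).
        self.aggregation_port: str = "EmulatedLinkAggregation"

    def ports(self) -> List[str]:
        return self.gateway_ports + [self.aggregation_port]

# A two-System Portal yields a 3-port relay:
assert len(DistributedRelay(2).ports()) == 3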
View from Systems A and B
[Figure: View from Portal Systems A and B. In each System, Function 1 connects through a Gateway to its DR Function. The two DR Functions are joined by an Intra-Portal Link (IPL) terminated at each end by an Intra-Portal Port (IPP), and each DR Function sits above a Link Aggregation sublayer with Aggregation Ports (MACs p, q in System A; MACs r, s, t in System B). Together the DR Functions present emulated System C.]
In each System A and B, the ports that are to be associated with System C are moved to a position
below the DR Function’s Link Aggregation sublayer.
A virtual link and its terminating virtual MACs, together called a "Gateway," are constructed to connect each
DR Function to its Function 1.
Between each pair of DR Functions in the Portal there is constructed an Intra-Portal Link (IPL),
terminated at each end by an Intra-Portal Port (IPP).
There is a "Gateway algorithm" that decides through which Gateway a frame can pass into or out of
the emulated Distributed Relay.
Similarly, a "Port algorithm" decides through which Portal System's Aggregation Ports a frame can
pass into or out of the emulated Distributed Relay (see the sketch below).
The DR Functions work together to move frames between the Gateways, the IPL, and the Link
Aggregation sublayers.
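A hedged sketch of the two decisions, assuming the frame's VLAN ID serves as the Gateway Conversation ID and a frame hash as the Port Conversation ID; the standard leaves both algorithms configurable, so every name below is illustrative:

from typing import Dict, List

def select_gateway(vlan_id: int,
                   gateway_priority: Dict[int, List[int]],
                   operational_gateways: set) -> int:
    """Gateway algorithm: pick the Portal System whose Gateway carries
    this Gateway Conversation ID (here, the VLAN ID)."""
    for system in gateway_priority[vlan_id]:
        if system in operational_gateways:
            return system
    raise RuntimeError("no operational Gateway for this conversation")

def select_aggregation_port(frame_hash: int, active_ports: List[int]) -> int:
    """Port algorithm: map a Port Conversation ID (here, a frame hash)
    onto one of the Link Aggregation Group's active Aggregation Ports."""
    return active_ports[frame_hash % len(active_ports)]

# Example: VLAN 100 prefers System 1's Gateway but falls back to System 2
# when System 1's Gateway is not operational.
assert select_gateway(100, {100: [1, 2]}, operational_gateways={2}) == 2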
Not an example of a DRNI Portal
[Figure: System D = A + B. The entirety of Systems A and B forms a single System D, with "Distributed Forwarding and/or Upper Layers" over a single Link Aggregation sublayer; its MACs form one Link Aggregation Group, alongside ports for possible other links.]
IEEE802.1AX-REV will not define the alternate model shown
in the figure above, in which the entirety of Systems A and B
simulates a single System D, but neither will it prevent the
use of DRNI in such a model.
Portal Systems Topology
The mechanisms specified in IEEE802.1AX-REV
only support certain topologies of Portal Systems
interconnected by IPLs.
The trivial case of a single-System Portal, which of course has no IPL.
A Portal with two Systems, connected by an IPL.
A ring of three Systems, connected by three IPLs.
Three Portal Systems in a linear topology with two IPLs. Note that this topology
may arise by design or by failure of an IPL in a ring of three Portal Systems.
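Expressed as a validity check over the number of Portal Systems and IPLs, the supported combinations are (an illustrative paraphrase of the list above):

def is_supported_topology(n_systems: int, n_ipls: int) -> bool:
    return (n_systems, n_ipls) in {
        (1, 0),  # single-System Portal: no IPL
        (2, 1),  # two Systems connected by an IPL
        (3, 3),  # ring of three Systems
        (3, 2),  # linear chain of three (by design, or a ring minus a failed IPL)
    }

assert is_supported_topology(3, 2)       # ring of three after one IPL fails
assert not is_supported_topology(2, 2)   # parallel IPLs are not listed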
Intra-Portal Link
An Intra-Portal Link can be physical (e.g., an 802.3 Ethernet LAN)
or logical (e.g., a PBB tunnel or an IETF pseudowire). DRNI
supports a number of methods by which the systems can
distinguish frames on a network link from frames on a particular
Intra-Portal Link:
Physical. A separate physical link can be used for any particular network link or Intra-Portal Link.
Aggregated. A separate Aggregation can be used for an IPL.
Time-shared. A network link and one or more IPLs can use the same physical link (or Aggregator Port),
but at different times. This requires that the Systems disable the use of the network link when the IPL is
required for connectivity, or else that the use of the Aggregation Links and the selection of Gateways be
adjusted to eliminate the need for using the IPL when the network link is required.
Tag-shared. If Per-service frame distribution is employed, and if the number of services required to
support the network link, plus the number of services required to support one or more Intra-Portal Links, is
less than the number of services supplied by the frame format used (e.g., 4094 S-VLAN IDs), then VID
translation can be used to separate the frames on the different logical links (see the sketch after this list).
Encapsulated. The frames on the network link and the IPL(s) can be encapsulated.
A system implementing the DRNI shall support using separate
physical links for IPLs and network links, and may support any of
the other methods.
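A sketch of the tag-shared method, assuming a handful of VIDs are set aside for the IPL; the VID values and tables below are invented for illustration:

IPL_VID_MAP = {100: 3000, 200: 3001}   # service VID -> VID reserved for the IPL
IPL_VIDS = set(IPL_VID_MAP.values())

def vid_for_transmission(vid: int, to_ipl: bool) -> int:
    """Translate a frame's VID before it is sent on the shared link."""
    return IPL_VID_MAP[vid] if to_ipl else vid

def classify_received(vid: int) -> str:
    """On receipt, the VID identifies the logical link the frame belongs to."""
    return "IPL" if vid in IPL_VIDS else "network link"

assert classify_received(vid_for_transmission(100, to_ipl=True)) == "IPL"
assert classify_received(vid_for_transmission(100, to_ipl=False)) == "network link"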
Distributed Relay Control Protocol
The purpose of the Distributed Relay Control Protocol
(DRCP) is to:
Establish communication between Portal Systems across an Intra-Portal Link;
Verify the consistent configuration of Portal Systems (see the sketch after this list);
Determine the identity to be used for the emulated System;
Distribute the current states of the Portal Systems and their
Aggregation Ports among each other;
Compute the resultant path of any frame required to pass through
each IPL, and exchange the information with adjacent Portal Systems
as required to ensure against forwarding loops and duplicate frame
delivery.
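The consistency-verification step might be sketched as follows; the dictionary keys are illustrative field names that mirror the DRCPDU contents on the next slide:

def configurations_compatible(mine: dict, neighbor: dict) -> bool:
    """A Portal System compares its own configuration against the one
    reported by its neighbor before enabling the Portal."""
    return all(mine[k] == neighbor[k] for k in (
        "portal_id",                    # both claim the same Portal
        "gateway_algorithm",            # same Gateway algorithm
        "port_algorithm",               # same Port algorithm
        "gateway_conversation_digest",  # same Conversation-ID-to-Gateway map
        "port_conversation_digest",     # same Conversation-ID-to-Port map
    ))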
Establishing the Portal and Distributed Relay
802.1AX-REV will not specify the automatic creation of Portals.
DRCP compares the network administrator’s intentions, as defined by managed
objects, to the physical topology of the configured Systems, and if the connected
Systems’ configurations are compatible, DRCP establishes and enables the
operation of the Portal.
In order to establish a Distributed Relay across a Portal, a network
administrator will configure the following managed objects:
Which Systems are to participate in the Portal.
Which point-to-point links are to be assigned as Intra-Portal Links.
Which Aggregator in each Portal System is to be assigned to this DR Function.
The methods to be used by the DR Functions to assign frames to Gateway
Conversation IDs and Port Conversation IDs.
The prioritized list of assignments of Conversation IDs to Gateways and
Aggregation Ports, to cover failure modes.
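These managed objects might be grouped into a single configuration record along these lines; the names are hypothetical, not the standard's managed-object identifiers:

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class DistributedRelayConfig:
    portal_systems: List[str]                  # Systems participating in the Portal
    intra_portal_links: List[Tuple[str, str]]  # point-to-point links used as IPLs
    aggregator_per_system: Dict[str, int]      # Aggregator assigned to each DR Function
    gateway_algorithm: str                     # e.g., assign by C-VID
    port_algorithm: str                        # e.g., assign by 5-tuple hash
    gateway_priority: Dict[int, List[str]]     # Gateway Conversation ID -> Systems,
                                               # in fallback order for failure modes
    port_priority: Dict[int, List[int]]        # Port Conversation ID -> Aggregation
                                               # Ports, in fallback order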
DRCPDU contents
A DRCPDU contains the following information:
The Actor System ID of the Aggregator attached to the DR Function
The Portal ID (which is also the emulated System’s Actor System ID)
The Portal System Number within the Portal of this DR Function
The administrative Aggregator Key value of the Aggregator attached to the DR Function
The Port algorithm used by this DR Function and Aggregator
The Gateway algorithm used by this DR Function
A digest of this DR Function’s prioritized Port Conversation ID-to-Aggregation Port
assignments
A digest of this DR Function’s prioritized Gateway Conversation ID-to-Gateway assignments
A Boolean indicating whether the DR Function’s Gateway is operational.
A list of the Port IDs of all this DR Function’s Aggregator’s Aggregation Ports.
The state of the immediate Neighbor Portal System DR Function on this IPP and on the other
IPP, if any (and only after its ability to form a Portal with the transmitting DR Function has
been verified), including their:
Portal ID;
Portal System Number;
Boolean Gateway operational flag; and
List of operational Port IDs.
The speed of this DR Function’s Aggregation Ports
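As a rough transliteration of the list above into code (illustrative only, not the on-the-wire TLV encoding):

from dataclasses import dataclass
from typing import List

@dataclass
class NeighborState:
    portal_id: str
    portal_system_number: int
    gateway_operational: bool
    operational_port_ids: List[int]

@dataclass
class DRCPDU:
    actor_system_id: str                  # of the Aggregator attached to the DR Function
    portal_id: str                        # also the emulated System's Actor System ID
    portal_system_number: int             # this DR Function's number within the Portal
    admin_aggregator_key: int
    port_algorithm: str
    gateway_algorithm: str
    port_conversation_digest: bytes       # prioritized Port Conversation ID assignments
    gateway_conversation_digest: bytes    # prioritized Gateway Conversation ID assignments
    gateway_operational: bool
    aggregation_port_ids: List[int]
    neighbor_states: List[NeighborState]  # neighbors on this IPP and the other IPP, if any
    port_speeds: List[int]                # speed of this DR Function's Aggregation Ports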
Current Status – Schedule
The 6th Task Group Ballot will be discussed during this meeting.
The aim is to have new drafts and associated ballots per IEEE802.1 meeting.
The goal is to have an approved and published standard during the first half of 2014.