
Technology Task Group 5
Gigabit Fiber Ethernets (Optical Ethernet)
1. Introduction
This report provides an analysis of the Gigabit Ethernet technologies (1 Gb and 10 Gb
Ethernet), which we will refer to as “Optical Ethernet.” We discuss Optical Ethernet
network architectures and capabilities, emerging EtherLEC carrier activity, and an
overview of related (competitive and synergistic) technologies. Optical Ethernet
technology may produce profound changes in network architectures and underlying
network technologies, and it has the potential to make significant inroads into the
MAN/WAN environments. We therefore examine the Optical Ethernet technologies from
the viewpoint of trying to answer the underlying question, “Is Optical Ethernet a viable
MAN/WAN technology, and will it play a significant role in future network architectures?”
2. Ethernet Background
Figure 1 shows the Ethernet protocol stack and its relation to the OSI Model. In OSI
terms, Ethernet provides layer 2 (Data Link) and layer 1 (Physical) functionality. The
parts of the protocol stack that are specific to Ethernet are shaded (the MAC, Physical
Signaling, and Media). The Logical Link Control (LLC) sub-layer defined in IEEE 802.2
and MAC Bridging defined in IEEE 802.1 are independent of any specific LAN
technology and sit on top of the Ethernet-specific stack.

[Figure 1 – Ethernet Protocol Structure: the seven-layer OSI Model alongside the major
IEEE sublayers (MAC Bridging, Logical Link Control, Media Access Control, Physical
Signaling), and the Ethernet frame structure: 64-bit preamble, 48-bit destination and
source addresses, 16-bit length/type field, 46- to 1500-byte Data/LLC payload, 32-bit
FCS; total frame length 64 to 1518 bytes]
The LLC performs error control, broadcasting, multiplexing, and flow control functions.
We do not go into the details of the LLC here since they are not important to
understanding Optical Ethernet capabilities vis-à-vis it being a MAN/WAN technology.
The IEEE 802.1D standard, which defines the MAC Bridging capability, is an important
protocol we discuss. We also discuss important supplements to 802.1D, such as 802.1Q
and 802.1p, which cover VLANs and priority capability for 802 LANs.
The other protocol specification we discuss is the Ethernet standard IEEE 802.3. The
current version (2000 edition) includes all of the major supplements relevant to GbE
(e.g., 802.3z). The supplement for 10GbE, 802.3ae, is in the final stages of the approval
process and should be approved in the July 2002 timeframe.
The bottom of Figure 1 shows the Ethernet frame structure, which has been maintained
through all of the Ethernet enhancements (802.1Q adds some additional fields that are
discussed later). The maximum frame length is 1518 bytes, and the Data/LLC payload
must be in the range of 46 to 1500 bytes. The minimum of 46 bytes is required for the
CSMA/CD capabilities, which are not relevant for Optical Ethernet networks because
half-duplex GbE links have never been implemented and almost surely never will be
(although the standard allows them), and the 10 GbE standard does not include CSMA/CD.
As speeds have increased there have been pressures to increase the maximum frame size,
but to date this has not happened and it is not likely to happen.
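The frame layout just described can be made concrete with a short parsing sketch.
Python is used here purely for illustration, and the frame contents are arbitrary; the
preamble precedes the frame on the wire and is not counted in the 64- to 1518-byte
frame length:

```python
import struct

def parse_ethernet_frame(frame: bytes):
    """Split an untagged Ethernet frame into its fields.
    Layout: 6-byte destination MAC, 6-byte source MAC,
    2-byte length/type, 46-1500-byte payload, 4-byte FCS."""
    if not 64 <= len(frame) <= 1518:
        raise ValueError("frame must be 64-1518 bytes")
    dst, src = frame[0:6], frame[6:12]
    (length_type,) = struct.unpack("!H", frame[12:14])
    payload, fcs = frame[14:-4], frame[-4:]
    return dst, src, length_type, payload, fcs

# Minimum-size frame: 46-byte (all-zero) payload gives 64 bytes total.
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + bytes(46) + bytes(4)
dst, src, ltype, payload, fcs = parse_ethernet_frame(frame)
assert len(frame) == 64 and len(payload) == 46
```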
Figure 2 illustrates the evolution of Ethernet from half-duplex, shared-media operation
to full-duplex Ethernet bridge/switch architectures, which are collision free.

[Figure 2 – Evolution of Ethernet Architecture]

The original Ethernet (1985), shown at the top, used a bus topology and a shared coaxial cable
medium. The coaxial bus is a collision domain that needs CSMA/CD to resolve
collisions. The speed of the first Ethernet was 10 Mbps and used thick coaxial cable; this
system is designated type 10BASE5 (10 refers to the speed 10 Mbps, BASE refers to
baseband medium and 5 refers to a 500 meter maximum segment length). A thin coaxial
cable was used later, and that system is designated type 10BASE2.
The next step in topology evolution (1990) was to use a hub (star) topology using
Ethernet repeaters at the hub. Along with this development came the use of Unshielded
Twisted Pair (UTP) cable for the medium, and the system is designated type 10BASE-T
(it was still 10 Mbps baseband and T refers to twisted pair). This configuration had the
advantage that UTP is widely used in office buildings, and a fault in any cable would not
take the whole LAN down the way a coax cable cut would in the bus topology. The repeater
is a layer 1 device that simply takes bits off the incoming port and repeats them to
all outgoing ports except the one the bits came from. Thus, even if the UTP pairs
were set up so there was a pair for each direction of transmission (duplex), the repeater
itself would still constitute a collision domain, and CSMA/CD was still required.
The next step was to run full-duplex UTP links between the host and the hub, and at the
hub is a LAN bridge. The bridge is a frame store-and-forward device, so it would buffer
incoming frames and wait until the outgoing ports were free to transmit them. Thus, with
this development it was possible to avoid collisions and CSMA/CD would not be
required. This was first implemented at 10 Mbps (1992) and a year later 100 Mbps
Ethernet (100BASE-T) was available. Note that an Ethernet switch is an Ethernet bridge
with more than two ports.
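The store-and-forward bridging just described can be sketched in a few lines. This is a
minimal illustration in Python (port counts and MAC strings are arbitrary), showing the
learn/forward/flood behavior that makes the collision-free topology possible:

```python
class LearningBridge:
    """Minimal sketch of an Ethernet learning bridge: it learns which
    port each source MAC was seen on, forwards frames to the learned
    port, and floods to all other ports when the destination is unknown."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port: int, src: str, dst: str) -> list[int]:
        self.mac_table[src] = in_port          # learn (or refresh) the source
        if dst in self.mac_table:
            out = self.mac_table[dst]
            # forward to the known port; drop if destination is on the ingress port
            return [out] if out != in_port else []
        # unknown destination: flood everywhere except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
assert bridge.handle_frame(0, "aa:aa", "bb:bb") == [1, 2, 3]  # flood: bb unknown
assert bridge.handle_frame(1, "bb:bb", "aa:aa") == [0]        # aa learned on port 0
```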
It is this full-duplex switched/bridged Ethernet topology that is the critical development
that makes Ethernet capable of evolving from a LAN technology to an enterprise and
carrier networking technology. First of all, with the elimination of the collision domain in
the hub node (i.e., the switch/bridge), the speed of the switch-to-switch trunks become
independent of the NIC speeds on the LAN hosts. Secondly, the use of full-duplex trunks
between switches allows efficient bandwidth usage and eliminates CSMA/CD length
limitations on trunks between Ethernet bridge/switches. The standard for GbE allows for
half-duplex links, but they have never been implemented. 10GbE does not allow half-duplex links. Therefore, for all practical purposes, GbE and 10GbE networks can be
considered as collision-free topologies that are based on Ethernet bridging and switching.
There is no CSMA/CD in these networks. There may be LANs terminating on an
Ethernet switch that are themselves collision domains, and CSMA/CD would be used on those
switch ports to interact with the LAN. But switch-to-switch GbE links are full duplex, and there
is no CSMA/CD in the switch-to-switch trunking part of the network.
3. Optical Ethernet Capabilities
In this section we summarize the key capabilities that allow Optical Ethernet
technologies to be seriously considered for application beyond the LAN
and enterprise domains. The key capabilities are:
- Full-duplex point-to-point links with long reach of 40 to 70 km (~110 km with extenders)
- Significant port cost advantage over SONET and ATM (~8:1)
- Layer 2/3 switching
- VLAN capability (802.1Q)
- Spanning tree routing (802.1D)
- Aggregate link capability (802.3ad)
- Priority capability (aggregate-flow QoS) provided by 802.1p
- Policy-based QoS and traffic policing/shaping
Long Reach Links – The GbE standard sets the minimum distance requirement for long
reach links at 5 km. If that were all that was achieved, then Optical Ethernet in the MAN
would not be a subject worthy of discussion. In fact, proprietary implementations of GbE
optics have achieved reaches in the range of 40 to 70 km, and with extenders the reach
can be as high as 110 km. With these distance achievements GbE has the capability to
connect most, if not all, offices in a metropolitan area without requiring intermediate
repeaters or amplification. The standard for 10 GbE calls for a minimum distance of 40
km for the long reach optics, and that distance will almost surely be exceeded by
proprietary implementations. Again, this puts the distance capabilities of this technology
within the domain of metropolitan networking. For long haul (WAN) applications,
Optical Ethernet will most likely be put on SONET/WDM or Digital Wrapper (G.709)
over WDM. The 10 GbE standard has defined a WAN Physical Layer (WAN PHY) that
has a payload that matches the OC-192/STM-64 payload (~ 9.6 Gbps) and provides the
same overhead bytes with a stripped down implementation to reduce costs. As a result,
this 10 Gb PHY is directly compatible with SONET/SDH long-haul transmission.
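The ~9.6 Gbps payload figure can be checked with SONET framing arithmetic. The sketch
below (Python, used only as a calculator) derives both the OC-192 line rate and the
STS-192c payload capacity from the standard frame dimensions; the column counts are the
SONET frame-format values:

```python
# SONET STS-192c framing arithmetic behind the 10 GbE WAN PHY payload rate.
ROWS, FRAMES_PER_SEC = 9, 8000           # 9-row frame, one frame every 125 us
N = 192                                  # STS-192c
total_cols = N * 90                      # 17,280 columns per frame
line_rate = total_cols * ROWS * FRAMES_PER_SEC * 8
assert line_rate == 9_953_280_000        # OC-192 line rate, 9.95328 Gbps

spe_cols = N * 87                        # after 3 columns/STS of transport overhead
payload_cols = spe_cols - 1 - (N // 3 - 1)   # minus path overhead and fixed stuff
payload_rate = payload_cols * ROWS * FRAMES_PER_SEC * 8
assert payload_rate == 9_584_640_000     # ~9.58 Gbps available to the WAN PHY
```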
Cost Advantage – Various studies have compared Ethernet and SONET port costs (e.g., a
Dell’Oro Group 2000 report and an RHK 2002 report), and they consistently show a
significant cost advantage for Ethernet. A comparison using the cost per gigabit of
bandwidth at comparable line rates shows about an 8-to-1 advantage for Ethernet
technology. This number varies somewhat from study to study, but it is always a
significant multiple. Studies of the prices Ethernet
carriers charge for services compared with traditional TDM private line costs show
Ethernet services costing about 30 -40% of the equivalent SONET $/bps. It is clear that
Ethernet (10/100/1000 Mbps) currently has a cost advantage that stems in large part from
the volumes in production brought about by the use of this technology in the enterprise
market. 10 GbE costs are not showing a significant cost advantage as yet, but this
technology is very new and it has not taken off in the enterprise market (that may take 2 to 3 years to get significant volumes).
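The $/Gbps comparison is simple arithmetic. The sketch below uses purely hypothetical
port prices (they are assumptions for illustration, not figures from the studies cited),
chosen to produce a ratio of the reported magnitude:

```python
# Purely hypothetical list prices to illustrate the $/Gbps comparison;
# the ~8:1 figure in the text comes from the cited studies, not these numbers.
ethernet_port_cost, ethernet_rate_gbps = 1_000.0, 1.0     # GbE port (assumed)
sonet_port_cost, sonet_rate_gbps = 20_000.0, 2.488        # OC-48 port (assumed)

eth_per_gbps = ethernet_port_cost / ethernet_rate_gbps
sonet_per_gbps = sonet_port_cost / sonet_rate_gbps
ratio = sonet_per_gbps / eth_per_gbps
print(f"SONET:Ethernet $/Gbps ratio ~ {ratio:.1f}:1")
```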
Layer 2/3 Switching – The Ethernet switching technology being developed has
advanced well beyond simple layer 2 Ethernet bridging functions. The switching
capabilities provide both layer 2 and layer 3 (i.e., IP routing) capabilities. In addition,
most Ethernet switches also support MPLS capability. These systems do not have the
layer 3 and MPLS routing capacity that a core IP router has, but they have sufficient
capability to provide the needed functionality in a metro network environment. It is
important to have layer 3 and MPLS capability because, as is well known, layer 2
networks do not scale; routing was invented for precisely this reason. As a
result, when switched Gigabit Ethernet networking is discussed, it should be considered
a layer 2/3 switched network and not a flat layer 2 network. An important aspect of
switched Gigabit Ethernet network design is where to use layer 2 switching and where to
use layer 3 routing. It should also be noted that these layer 2/3 switches have non-blocking
switching fabrics and provide the same types of QoS related queuing, scheduling, etc.
that IP routers employ.
‘Plug-and-Play’ – Ethernet has been designed with the intent to make it as much as
possible ‘plug-and-play.’ Each Ethernet frame has both a source and destination MAC
address. The source address is used by the Ethernet Bridges/Switches to identify what
hosts are connected to which ports. This means that when hosts are moved from one LAN
to another, there is no provisioning required to reconfigure the network. Also, MAC
addresses are burnt into the Ethernet line cards, so there is no provisioning necessary
there either. The ‘plug-and-play’ capabilities of Ethernet provide some important
operations benefits, and this makes these networks simpler to operate in some respects.
There are some scalability issues with managing Ethernet networks that we discuss in the
next section; so ‘plug-and-play’ does not solve everything, but it is an important benefit.
VLAN Capability – The virtual LAN (VLAN) capability provided by 802.1Q is important
for enterprise networks, and thus it is important for MAN carriers to support in
providing transparent LAN services. The VLAN capability is implemented with the VLAN
tag, shown below, which provides a 12-bit VLAN ID as well as a 3-bit User Priority field.

[Tagged Ethernet frame: 64-bit preamble, 48-bit destination and source addresses, a
16-bit tag type followed by a 16-bit tag control field carrying the 3-bit User Priority
and 12-bit VLAN ID, 16-bit length/type field, 46- to 1500-byte payload, 32-bit FCS]

VLANs are important to enterprise networks because they allow them to segment their
layer 2 network into subgroups, which makes the networks more scalable (limits broadcast
domains). VLANs are also useful in segregating traffic in the network, thus allowing
more efficient load distribution. With VLANs it is possible to make non-physical
groupings of nodes that share the same resources (e.g.,
work groups that are geographically dispersed), and as people move around it is painless to keep
their membership in the same VLAN. Carriers also can utilize VLANs to help distribute
traffic more efficiently on their layer 2 networks, but they need to do so using a different
VLAN address space than that used by the enterprise. Currently this is done by
proprietary means, like stacking VLAN tags.
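The 802.1Q tag layout described above amounts to a small bit-packing exercise. The sketch
below (Python; the priority and VLAN ID values are arbitrary examples) packs and unpacks
the 4-byte tag; stacked (Q-in-Q) tagging simply prepends another such tag:

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def build_vlan_tag(priority: int, vlan_id: int, cfi: int = 0) -> bytes:
    """Pack the 4-byte 802.1Q tag: 16-bit TPID, then a 16-bit tag control
    field holding the 3-bit user priority, 1-bit CFI, and 12-bit VLAN ID."""
    assert 0 <= priority < 8 and 0 <= vlan_id < 4096
    tci = (priority << 13) | (cfi << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

def parse_vlan_tag(tag: bytes):
    tpid, tci = struct.unpack("!HH", tag)
    return tpid, tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

tag = build_vlan_tag(priority=5, vlan_id=42)
assert parse_vlan_tag(tag) == (0x8100, 5, 0, 42)
```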
Spanning Tree Routing – Spanning tree routing is specified in 802.1D. Spanning tree
routing does for layer 2 networks what Interior Gateway Protocols (IGP) like OSPF do
for IP networks. Specifically, the Spanning Tree Protocol (STP) provides the capability
for layer 2 switches to communicate with one another using control messages so they can
negotiate among themselves to establish a spanning tree (which defines how the layer 2
routing is done). When there is a link or node failure, the nodes again communicate
with one another to reconfigure the network onto a new spanning tree (as long as the
network is not disconnected, a complete new spanning tree will be generated).
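The effect of the protocol can be illustrated with a graph sketch: given a bridged
topology, a spanning tree rooted at the lowest-ID bridge leaves exactly one forwarding
path between any two switches, with the remaining links blocked. The Python below
computes the tree centrally purely for illustration; the real protocol negotiates it in
a distributed fashion with BPDU messages:

```python
from collections import deque

def spanning_tree(links: set) -> set:
    """BFS spanning tree rooted at the lowest bridge ID. Returns the
    subset of links left forwarding; the rest are blocked."""
    nodes = sorted({n for link in links for n in link})
    root = nodes[0]                       # lowest bridge ID becomes the root
    visited, tree, queue = {root}, set(), deque([root])
    while queue:
        u = queue.popleft()
        for link in links:
            if u in link:
                (v,) = link - {u}
                if v not in visited:
                    visited.add(v)
                    tree.add(link)        # link joins the forwarding tree
                    queue.append(v)
    return tree

# A ring of four switches: one link ends up blocked to break the loop.
L = lambda a, b: frozenset((a, b))
links = {L(1, 2), L(2, 3), L(3, 4), L(4, 1)}
tree = spanning_tree(links)
assert len(tree) == 3 and len(links - tree) == 1
```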
Aggregate Link Capability – Link aggregation is an important capability in 802.3. It
provides the ability to have a pool of links between two switches that appear to the
spanning tree algorithm as one logical link. Link aggregation also allows redundant,
diverse link capacity to be provided and a fast failover (< 1 second) to occur when a link
fails. With this capability, GbE networks can achieve relatively fast failover for fiber
cuts. Various load-sharing techniques are used with priorities for different Class of
Service (CoS) being indicated by the 3-bit user priority field in the VLAN tag. If
insufficient capacity is provided to handle all of the traffic under failure conditions, the
CoS indication can be used to allocate the available bandwidth by CoS.
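The load-sharing idea can be sketched as hashing a flow identifier onto one member of
the aggregate, so all frames of a flow stay on one link (preserving frame order); on a
link failure, flows simply re-hash over the surviving links. 802.3ad leaves the
frame-distribution function implementation-specific, so the CRC-32 hash below is only a
stand-in:

```python
import zlib

def pick_member_link(src_mac: str, dst_mac: str, active_links: list) -> int:
    """Hash a flow identifier onto one member of the aggregate so all
    frames of a flow use the same link (preserving frame order). The
    hash function here is an arbitrary stand-in for a vendor's choice."""
    flow = (src_mac + dst_mac).encode()
    return active_links[zlib.crc32(flow) % len(active_links)]

links = [0, 1, 2, 3]
before = pick_member_link("aa:aa", "bb:bb", links)
links.remove(before)                      # simulate that member link failing
after = pick_member_link("aa:aa", "bb:bb", links)
assert after != before and after in links  # the flow fails over to a survivor
```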
CoS Priority Indication – As mentioned above, the VLAN tag has a 3-bit user priority
field that is used to identify class of service. Thus queuing disciplines for allocating
bandwidth and giving priority service can be done for traffic aggregates using the CoS
priority indication. This capability is consistent with the IP Differentiated Services
(DiffServ) Model, which also uses a 3-bit priority field. Thus, as packets go from layer 2
to layer 3, and vice versa, a consistent layer 2/3 priority handling of the packets can be
accomplished. It should be noted that all of the queuing and scheduling techniques that
are used at layer 3 (e.g., various forms of weighted fair queuing) have also been
implemented in the layer 2/3 Gigabit Ethernet switches.
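Queuing on the 3-bit priority field can be sketched with the simplest such discipline,
strict priority; as the text notes, real switches also implement weighted fair variants.
A minimal Python illustration (frame contents are arbitrary strings):

```python
from collections import deque

class StrictPriorityScheduler:
    """Sketch of per-CoS queuing keyed on the 3-bit user priority (0-7).
    Strict priority is the simplest discipline; weighted fair queuing
    variants are also used in practice, as noted in the text."""

    def __init__(self):
        self.queues = [deque() for _ in range(8)]  # one queue per CoS value

    def enqueue(self, priority: int, frame: str):
        self.queues[priority].append(frame)

    def dequeue(self):
        for q in reversed(self.queues):   # serve the highest priority first
            if q:
                return q.popleft()
        return None                       # all queues empty

sched = StrictPriorityScheduler()
sched.enqueue(0, "best-effort")
sched.enqueue(6, "voice")
assert sched.dequeue() == "voice"         # higher CoS is served first
assert sched.dequeue() == "best-effort"
```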
Policy-Based QoS and Traffic Policing/Shaping – Proprietary capabilities have been
developed in vendor products that allow a carrier to allocate and enforce a customer’s
desired average bandwidth usage on a point-to-point basis. This allows a customer to
dynamically change their bandwidth allocations, and thus pay only for the bandwidth
they need. In addition, the carrier can mark frames with different CoS values based on policies the
customer wants to use. Thus traffic is classified into different aggregate service classes
based on policy. As mentioned above, the Ethernet switches are capable of allocating
bandwidth and enforcing priorities based on the CoS indications.
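Bandwidth policing of this kind is typically some form of token bucket. The sketch below
(Python; the rate and burst values are arbitrary) admits frames that conform to the
contracted average rate plus a burst allowance; a marking variant would demote
non-conforming frames to a lower CoS instead of dropping them:

```python
class TokenBucketPolicer:
    """Minimal token-bucket sketch: tokens (in bytes) accumulate at the
    contracted average rate up to a burst allowance; a frame conforms
    only if enough tokens are available to cover it."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # token refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes         # start with a full bucket
        self.last = 0.0

    def conforms(self, now: float, frame_bytes: int) -> bool:
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True
        return False                      # excess traffic: drop (or demote)

p = TokenBucketPolicer(rate_bps=8_000, burst_bytes=1500)  # 1 kB/s average
assert p.conforms(0.0, 1500) is True    # burst allowance covers the first frame
assert p.conforms(0.0, 64) is False     # bucket empty until tokens refill
assert p.conforms(1.0, 1000) is True    # 1 s at 1 kB/s refills 1000 bytes
```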
4. Optical Ethernet Deficiencies
The deficiencies in Optical Ethernet are in four areas: Fault Protection/Restoration Times,
Providing QoS, Performance Monitoring and Fault Management, and Scalable OA&M
Capabilities. Each of these deficient areas is summarized below.
Fault Protection/Restoration Times – The standard for protection switching time in
SONET/SDH networks is 50 ms. By comparison, the protection/restoration times in
Optical Ethernet are on the order of one second or more. For most data applications it is
not necessary to have 50 ms protection, but the industry still seems to be biased towards
achieving close to that, and at worst being in the 100 to 200 ms range. Thus, some
significant reductions in protection/restoration times are needed in Optical Ethernet for it
to be considered “carrier grade.”
The fault recovery capability in Optical Ethernet comes from the spanning tree capability
and the aggregate link capability. The original spanning tree algorithm in 802.1D has a
convergence time of 30 to 50 seconds. Much of this time is caused by timers that ensure
all nodes have been updated before rerouting actions are taken. If routing loops form in a
layer 2 network, packets circulate and regenerate themselves and serious congestion and
deadlock problems occur. Thus, it is essential that there are never any routing loops in a
layer 2 network, and this is why the spanning tree convergence time is slow. An
improved spanning tree protocol (STP) is being developed in IEEE 802.1w. The basic
technique being used is to preplan most of the backup capabilities so that complete
recalculation of the spanning tree is not needed. However, in some cases the revised STP
will take on the order of 1 second to converge. Other techniques of routing and protection
using MPLS have been developed and claims are made that 50 ms protection is achieved.
This is probably true when a single path fails and is recovered, but things generally take
much longer when many paths fail at the same time and all need to be recovered.
Nevertheless, it is quite likely that MPLS techniques will be able to achieve worst-case
performance in the few-hundred-millisecond range.
The aggregate link failover capability is also on the order of one second. The failover
techniques are not IEEE standards, and thus improvement requires vendors to improve
their proprietary capabilities.
Providing QoS – Providing QoS in Optical Ethernet networks is in a similar state to IP
QoS using DiffServ. Over-provisioning is the most widely used method to ensure that
delay- and jitter-sensitive traffic gets adequate performance. QoS is really based on CoS, in which
traffic aggregates are managed and not individual flows. What this ultimately means is
that the network engineering must over provision capacity to account for the lack of
knowledge of where, specifically, bandwidth will be required and to account for there
being no admission control, except policing of aggregate traffic streams at the network
edge. As long as Ethernet technology has a significant cost advantage, it can afford to be
somewhat inefficient in its bandwidth usage in order to ensure QoS. As the cost
differences close, however, some increased efficiencies may be needed. Another point of
view is that it is not clear that delay- and jitter-sensitive traffic needs to be carried by
Optical Ethernet networks. It may be more reasonable to continue to provide those slow
growing services as they have been on SONET and ATM networks. In this way the
Optical Ethernet network would be used to provide delay and jitter tolerant data traffic,
which is likely to be a significant portion of the total load.
Another issue is that the layer 2 routing protocols (spanning tree) do not distribute traffic
well on the available capacity. Only links in the spanning tree are used to carry traffic.
The other links in the network only get used when there is a failure. There is work
underway in IEEE 802.1s to develop the multiple spanning tree capability. With multiple
spanning trees, traffic can be segmented into different groups, for example using VLAN
IDs, and different groups would use different spanning trees. This would allow all links
to be used to carry traffic under normal conditions and give better load balance. The load
balancing will be somewhat coarse, but much better than a single spanning tree.
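The 802.1s idea can be sketched as a mapping from VLANs to spanning tree instances, so
that links blocked by one tree can carry traffic for another. The modulo mapping below
is an arbitrary illustration, not the standard's configuration mechanism:

```python
def msti_for_vlan(vlan_id: int, num_trees: int = 2) -> int:
    """Sketch of the 802.1s idea: VLANs are partitioned into groups and
    each group uses its own spanning tree instance. A real deployment
    assigns VLANs to instances by configuration, not by this modulo rule."""
    return vlan_id % num_trees

# Adjacent VLANs land on different trees, spreading load across links.
assert msti_for_vlan(10) != msti_for_vlan(11)
```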
Performance Monitoring and Fault Management – Ethernet provides no overhead for
performance monitoring, alarms, protection signaling, etc. as is done in SONET/SDH and
ATM. The 10 GbE WAN PHY does provide some capabilities in the SONET overhead,
but it is limited. Since it is not possible to directly measure bit error rates (BER), other
techniques need to be used to measure performance. One way this is done is to collect
data on frame CRC errors and dropped packet counts. This is a much slower process,
however, than directly measuring BER. Without alarm indications, sectionalizing and
doing root cause analysis is rather primitive in Ethernet networks. If networks are
relatively small, this is not a significant problem, but when networks get large, isolating
the cause of a problem can be a daunting task. The IEEE Ethernet in the First Mile
working group is currently developing proposals for providing overhead to meet these
performance monitoring and fault management needs.
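The CRC-based monitoring described above can be sketched briefly. Ethernet's FCS uses
the same CRC-32 polynomial that Python's zlib.crc32 implements (bit-ordering details of
a real PHY are glossed over here), and the frame data is arbitrary:

```python
import zlib

def fcs_ok(frame_without_fcs: bytes, fcs: int) -> bool:
    """Check a frame against its 32-bit FCS. zlib.crc32 uses the same
    CRC-32 polynomial as the Ethernet FCS; PHY-level bit ordering is
    glossed over in this sketch."""
    return zlib.crc32(frame_without_fcs) == fcs

# Estimate a frame error rate from counters, as the text describes.
good = bad = 0
frames = [(b"payload-1", zlib.crc32(b"payload-1")),
          (b"payload-2", zlib.crc32(b"payload-2") ^ 0x1)]  # one corrupted FCS
for data, fcs in frames:
    if fcs_ok(data, fcs):
        good += 1
    else:
        bad += 1
frame_error_rate = bad / (good + bad)
assert (good, bad) == (1, 1) and frame_error_rate == 0.5
```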
Scalable OA&M Capabilities – Gigabit Ethernet technology grew out of the enterprise
network environment, and in that environment networks do not need to scale to very
large sizes in terms of components in the network, service requests, number of customer
connections, etc. As a result, many of the OA&M capabilities are not automated, and
there is a focus on SNMP as the management protocol. For a network to be ‘carrier
grade,’ it is necessary to be able to scale its operations capability to very large sizes. The
main areas that need to be addressed in Optical Ethernet OA&M scalability are providing
capabilities that enable secure single-ended maintenance, loopback testing at
customer/carrier demarcation points, flow-through provisioning, and integrated
Operations Support Systems (OSSs) that employ standard interfaces.
The IEEE Ethernet in the First Mile (EFM) study group is addressing some of the
maintenance issues. Specifically, they are looking at providing loopback capability to
remotely test whether a fault is in the carrier network or the customer’s network. This is a
single link capability, however, and significantly more work needs to be done to provide
maintenance and testing capabilities for end-to-end connections. ITU-T Study Group 13
is beginning to look into end-to-end OA&M, but this work is in the early stages.
The Metro Ethernet Forum has on their agenda Ethernet service definitions, technical
specifications, interoperability, and end-to-end service provisioning and delivery. They
will also be leveraging the EFM work in developing end-to-end OAM&P ‘standards.’
The need for flow-through provisioning and integrated OSS capability is extremely
important for large carriers. In small networks such as those of the ELECs and
enterprises, many things can be done manually, but as networks scale this becomes an
operations nightmare. There needs to be standard interfaces provided for OSSs so that
‘best of breed’ management software systems can be integrated into an overall OSS
structure that supports end-to-end management and a high degree of flow-through across
the various support systems.
Another area where capabilities are needed is in capacity planning and management. This
needs to be done in conjunction with the QoS capabilities the network provides and the
Service Level Agreements that the carrier provides. Associated with this is the need for
data collection and analysis to understand the network performance and provide a basis
for mediating disputes with customers regarding adherence to SLAs. As networks scale,
all of these functions need to be highly automated and part of an integrated OSS structure.