ITC2014 Ethernet packet filtering for FTI – part II

Transcript
Ethernet Packet Filtering – Part 2
by Øyvind Holmeide
10/28/2014
FTI network challenges
This presentation targets two FTI network challenges:
1. How to guarantee worst-case latency for latency sensitive data?
2. How to achieve near wire speed network performance on recorder
ports without packet loss?
FTI network challenge
1. How to guarantee worst-case latency for latency sensitive data?
– Answer: Quality of Service (QoS)
QoS
Quality of Service (QoS) in Ethernet
• QoS is relevant for:
– Real time critical data
– Latency sensitive data
– Loss critical data
• QoS can be configured on Ethernet switches based on:
– Layer 1: Port-based priority
– Layer 2: VLAN tagging (IEEE 802.1p)
– Layer 3: IP ToS/CoS
QoS
Standard Ethernet switches provide several QoS properties:
• Scalable bandwidth
• Full duplex connectivity (no collision)
• Flow control off – deterministic access (send when you want)
• Several queues per port:
– High priority packets in high priority queues
– Low priority packets in low priority queues
• Strict/fixed priority scheduling (QoS policy) gives the best QoS properties for high priority data (see the sketch below)
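A minimal sketch of strict (fixed) priority scheduling over two egress queues per port, as described above; the class and packet names are illustrative assumptions, not any particular switch API.

from collections import deque

class StrictPriorityPort:
    def __init__(self):
        self.high = deque()   # latency sensitive (high priority) packets
        self.low = deque()    # all other packets

    def enqueue(self, packet, high_priority):
        (self.high if high_priority else self.low).append(packet)

    def dequeue(self):
        # The high priority queue is always served first; the low priority
        # queue is only served when no high priority packet is waiting.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

port = StrictPriorityPort()
port.enqueue("bulk-1", high_priority=False)
port.enqueue("fti-sample-1", high_priority=True)
assert port.dequeue() == "fti-sample-1"   # high priority packet is sent first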
QoS
QoS: Traffic prioritization:
QoS
Layer 1 – Priority classification based on port-based priority:
• All incoming packets on a given port are assigned the same priority
• The associated packet priority is lost for the next switch in the network chain, unless a VLAN tag including the packet priority is kept on the switch egress port
(Slide figure: OSI reference model, Layer 1 – Physical highlighted)
QoS
Layer 2 – Priority classification based on IEEE 802.1p

Ethernet MAC header (layer 2) without 802.1p:
Destination | Source | Type | ... | FCS

Ethernet MAC header (layer 2) with 802.1p:
Destination | Source | Tagged frame type (0x8100) | Tag (16 bit) | Type | ... | FCS

The 16-bit tag carries:
• 3-bit priority field (802.1p)
• 1-bit canonical format indicator
• 12-bit 802.1Q VLAN identifier
(Slide figure also shows the OSI reference model, Layer 2 – Data link highlighted)
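As an illustration of the tag layout above, the following minimal sketch packs the tagged frame type and the 16-bit tag from its three sub-fields; the priority and VLAN ID values are arbitrary examples.

import struct

def vlan_tag(priority: int, cfi: int, vlan_id: int) -> bytes:
    # Tagged frame type (0x8100) followed by the 16-bit tag:
    # 3-bit 802.1p priority | 1-bit canonical format indicator | 12-bit VLAN ID
    assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (cfi << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# Example: priority 6, VLAN 100 -> inserted between source MAC and EtherType
print(vlan_tag(priority=6, cfi=0, vlan_id=100).hex())  # 8100c064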
QoS
Layer 3 – Priority classification based on IP ToS/CoS

IPv4 header (layer 3), carried after the MAC header:
Version | IHL | Type of Service | Total Length
Identification | Flags (DF, MF) | Fragment Offset
Time to Live | Protocol | Header Checksum
Source IP address
Destination IP address
Options

The Type of Service field carries the layer 3 priority marking.
(Slide figure also shows the OSI reference model, Layer 3 – Network highlighted)
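For illustration, the sketch below shows how a DSCP code point is placed in the IPv4 Type of Service byte that switches use for layer 3 classification; the EF code point and the socket call are example assumptions, not recommendations from the presentation.

import socket

def tos_byte(dscp: int, ecn: int = 0) -> int:
    # 6-bit DSCP in the upper bits of the ToS byte, 2-bit ECN in the lower bits
    assert 0 <= dscp <= 63 and 0 <= ecn <= 3
    return (dscp << 2) | ecn

EF = 46                     # Expedited Forwarding, often used for latency sensitive traffic
print(hex(tos_byte(EF)))    # 0xb8

# A sender can mark its UDP packets directly (Linux/BSD sockets):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(EF))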
QoS
How to guarantee worst-case latency for latency sensitive data?
• High priority packets = latency sensitive data
• Such packets will never be lost in the switch due to congestion, and their worst-case latency can be calculated, provided that:
– The total amount of high priority traffic never exceeds the bandwidth of the drop links
– The characteristics of the high priority packets are known
Example ...
QoS
Example:
• 100 Mbps with full duplex connectivity is used on all data source drop links and 1 Gbps is used on the switch trunk ports
• The switch is a store-and-forward switch with a minimum switch latency of 10 μs
• The switch uses strict priority scheduling
• The latency sensitive packet has a length of 200 bytes including preamble, MAC, IP, UDP, payload, FCS and minimum IPG. The latency sensitive packets are treated as high priority packets; all other packets have lower priority
• Up to five other end nodes may generate similar latency sensitive packets of 200 bytes that may be in the same priority queue before the packet enters the queue, causing extra switch delay
• All latency sensitive packets are generated in a cyclic manner
Cont’d ..
QoS
Example cont’d:
• The worst case switch latency of a latency sensitive packet will then be:
1. 16 μs, store-and-forward delay (200 bytes received at 100 Mbps)
2. 10 μs, minimum switch latency
3. 12 μs, worst-case latency due to flushing of a packet with maximum packet length already being transmitted on the 1 Gbps trunk port
4. 8 μs, five latency sensitive packets of 200 bytes already in the same priority queue
Total worst case: 46 μs
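A small recomputation of the numbers above, assuming the 100 Mbps ingress drop link and 1 Gbps egress trunk from the example, and a maximum frame of 1538 bytes including preamble and minimum IPG (an assumption consistent with the 12 μs flushing term).

def tx_time_us(length_bytes, link_bps):
    # Transmission time of a frame on a link, in microseconds
    return length_bytes * 8 / link_bps * 1e6

store_and_forward = tx_time_us(200, 100e6)   # 16 us: receive 200 B at 100 Mbps
min_switch_latency = 10.0                    # 10 us: given switch property
flush_max_packet = tx_time_us(1538, 1e9)     # ~12 us: max-length frame on the 1 Gbps trunk
queued_packets = 5 * tx_time_us(200, 1e9)    # 8 us: five 200 B packets ahead in the queue

total = store_and_forward + min_switch_latency + flush_max_packet + queued_packets
print(f"worst case: {total:.1f} us")         # ~46 us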
FTI network challenge
2. How to achieve near wire speed network performance on recorder
ports without packet loss?
Answer: Traffic shaping and/or port trunking
Network throughput
Half duplex network:
Once network segment utilization reaches 40%, the throughput decreases substantially
(Figure from Pocketbook_ethernet_switching.pdf)
Network throughput
Full duplex network:
- Collisions on the wire are no longer an issue, but
- If no smart network engineering is performed and the peak load over some time period exceeds the bandwidth of a given drop link, then there is a risk of dropping packets in the Ethernet switch as a result of network congestion.
(Slide figure: four-port switch fabric with packets from several ingress ports converging on the same egress ports; from Pocketbook_ethernet_switching.pdf)
Network throughput
Dropped packets due to network congestion depend on:
- Switch buffer capacity
- Multicast filters + Head-of-Line blocking prevention
- QoS
- Peak load
Note: average network load is an unsuitable parameter for estimating the probability of dropped packets due to network congestion. An average bandwidth utilization of 20-30% may still result in dropped packets if the switch load is bursty, as the toy simulation below illustrates.
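A toy per-millisecond queue model of a single egress port; the 1 Gbps link speed, 256 kB buffer and burst pattern are illustrative assumptions, not measurements from the presentation. Both traffic patterns carry the same 25% average load, but only the bursty one overflows the buffer.

def simulate(arrivals_bytes_per_ms, buffer_bytes=256_000, drain_bytes_per_ms=125_000):
    # Arrivals are added to the buffer, anything above the buffer size is
    # dropped, and the 1 Gbps link drains 125 kB every millisecond.
    queue, dropped = 0, 0
    for arriving in arrivals_bytes_per_ms:
        queue += arriving
        if queue > buffer_bytes:
            dropped += queue - buffer_bytes
            queue = buffer_bytes
        queue = max(0, queue - drain_bytes_per_ms)
    return dropped

# Same total traffic (about 25% average utilization over one second) in both cases:
smooth = [31_250] * 1000                                          # 31.25 kB every ms
bursty = [625_000 if ms % 20 == 0 else 0 for ms in range(1000)]   # 625 kB every 20 ms

print("smooth load, dropped bytes:", simulate(smooth))   # 0
print("bursty load, dropped bytes:", simulate(bursty))   # large, despite the same average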
Traffic shaping
• Without shaping
(Slide figure: clusters 1, 2, ... 6 feeding the central switch, with the recorder link as the bottleneck)
Average load from each cluster < 150 Mbps, but data is bursty
but data is bursty
Traffic shaping
• With shaping
(Slide figure: the same clusters and recorder, now with rate shaping on the cluster switches; no packet loss anymore)
Traffic is shaped, peak load reduced from 1 Gbps to 150 Mbps
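A minimal token bucket sketch of the kind of rate shaping shown above, limiting a cluster's egress toward the recorder to 150 Mbps; the bucket depth and timing granularity are illustrative assumptions, not values from the presentation.

class TokenBucketShaper:
    def __init__(self, rate_bps=150e6, bucket_bytes=32_000):
        self.rate_bytes_per_s = rate_bps / 8   # token refill rate
        self.bucket = bucket_bytes             # burst allowance
        self.tokens = bucket_bytes
        self.last = 0.0

    def conforms(self, packet_bytes, now):
        # True if the packet may be sent now at the shaped rate;
        # otherwise it has to wait in the queue until tokens refill.
        self.tokens = min(self.bucket,
                          self.tokens + (now - self.last) * self.rate_bytes_per_s)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# A 1 Gbps burst of 1500-byte packets arriving back to back (12 us apart):
shaper = TokenBucketShaper()
conforming = sum(shaper.conforms(1500, t * 12e-6) for t in range(1000))
print(conforming, "of 1000 packets conform immediately; the rest are delayed")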
Traffic shaping
Example:
A lab test setup based on the following components was established.
• 30 x IED simulators (Smartbit cards), where up to 30 Mbps is generated per IED simulator. Data is sent in bursts
• 6 x CM1600 cluster switches
• 1 x CM1600 central switch
• 1 x recorder simulator
Traffic shaping
Example cont’d:
Two tests were performed:
1. No rate shaping enabled on the cluster switches
Result: 4-5% packet loss
2. Rate shaping enabled on each of the cluster switches.
Rate shaping level set to 150 Mbps
Result: No packet loss
Traffic shaping and port trunking
• With shaping and port trunking
Two or more ports are combined into a logical trunk by using the Link Aggregation Control Protocol (LACP) according to IEEE 802.1ax
(Slide figure: clusters 1, 2, ... 6 feeding the recorder over the aggregated trunk)
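To illustrate why a two-port trunk can approach twice the bandwidth, the sketch below distributes flows over the trunk members by hashing the MAC address pair; the hash choice and addresses are illustrative only, real switches use their own distribution algorithms.

import zlib

TRUNK_LINKS = 2

def trunk_member(src_mac: str, dst_mac: str) -> int:
    # Each flow (address pair) is pinned to one member link of the trunk
    return zlib.crc32(f"{src_mac}-{dst_mac}".encode()) % TRUNK_LINKS

recorder_mac = "02:00:00:00:00:ff"                          # locally administered example address
cluster_macs = [f"02:00:00:00:00:{i:02x}" for i in range(1, 7)]
for mac in cluster_macs:
    print(f"cluster {mac} -> trunk link {trunk_member(mac, recorder_mac)}")
# With several source clusters the flows spread over both links, so the
# aggregate bandwidth toward the recorder approaches twice a single port.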
Conclusion
• Worst case switch latency for latency sensitive data can be
guaranteed if QoS techniques are used. Worst case switch latency
less than 100µs is possible for a gigabit switch.
• Near wire speed network performance on recorder ports without packet loss can be achieved by using traffic shaping techniques.
• Combining two ports in a logical trunk according to IEEE 802.1ax can theoretically double the bandwidth to the recorder.