Network and Communications
Hongsik Choi, Department of Computer Science, Virginia Commonwealth University

Peer-to-Peer Networks
• Large number of nodes, symmetric, no central database or central control.
• How do I find the node that holds what I want? (The slides use a 5-bit ID example.)

Chord System
• Hash a name (in ASCII) to a 160-bit key and an IP address to a 160-bit node ID; the nodes form a ring of 2^160 identifiers.
• Store (name, my IP address) at successor(hash(name)).
• Finding a key's successor (or predecessor) by following successor pointers around the ring takes linear time. Instead, each node k keeps a finger table with m entries, where entry i starts at (k + 2^i) mod 2^m.
• With the finger table, a lookup takes O(log n) hops (the slide works a small example with key 3 and nodes 1, 14, and 16).

Congestion Control: Link Management
Packet streams arrive into a buffer in front of an outgoing link. The link is managed by:
• Admission (Drop Tail, Random Early Detection (RED))
• Scheduling: FIFO
• Time Division Multiplexing (TDM) and Frequency Division Multiplexing (FDM)
• Rate-proportional service

Drop Tail FIFO
Packet streams share a single buffer and outgoing link.
• Statistical multiplexing: better average delay and loss probability (due to buffer overflow), and better utilization of resources -- good.
• Work conserving: as long as there is a packet, the link works at full capacity -- good.
• Simple -- good.
• Packet streams interfere with one another, so guaranteed QoS is harder to achieve.
• Packets are lost in bursts -- not good for TCP.

Multiplexing Streams of Packets
Flows 0, 1, 2, 3, ... share a buffer and a link of capacity C. How do we split the bandwidth among the different flows?

TDM and FDM
The rates satisfy r1 + r2 + r3 + r4 = C.
• TDM: slot time and organize the slots into frames; each "channel" gets certain slots in a frame.
• FDM: the bandwidth is split up using analog techniques.
• Bandwidth is reserved for each flow: if a flow has no packets, its bandwidth goes unused.

Processor Sharing: Best of Both Worlds
(Analogous to CPU bursts sharing a single CPU.)
• Divide the bandwidth C of the link equally among the active flows; "active" means packets are present.
• Each active flow gets C/A bandwidth, where A = the number of active flows.

In the Network Layer: Virtual Circuit versus Datagram
Knobs for congestion control include:
1. Admission control
2. Careful route discovery
and also the packet queueing and service policy, the packet discard policy, the routing algorithm, and packet lifetime management. Which packet should be dropped -- a new one or an old one? By priority? Random Early Detection?

Virtual Clock Service
• Implementing processor sharing (GPS) is difficult because you have to split bandwidth dynamically.
• We'll look at virtual clock service, which is actually a packetized version of TDM.
• We'll also look at packetized GPS (PGPS).

Generalized Processor Sharing (GPS)
Each flow k is assigned a rate r(k); usually Σ_{i=1..n} r(i) ≤ C.
Bandwidth for flow k = [r(k) / Σ_{i∈A} r(i)] · C, where A is the set of active flows.

Virtual Clock Service
Each flow k has rate r(k). Packets arrive into the link buffer:
• a(i) = arrival time of the i-th packet of flow k
• L(i) = length of the packet
• d(i) = virtual departure time of the packet = max{a(i), d(i-1)} + L(i)/r(k)
Packets in the buffer are transmitted in order of smallest virtual departure time. What is going on? The scheduler is emulating a TDM system.

The TDM System
Flows 1..n are served at rates r(1), ..., r(n). Consider flow k: the i-th packet of flow k departs at
d(i) = (time the packet begins transmission) + L(i)/r(k)
     = max{d(i-1), a(i)} + L(i)/r(k)
     = the virtual departure time in the virtual clock system.
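To make the virtual clock computation concrete, here is a minimal Python sketch (not from the slides; the flow names, reserved rates, and packet sizes are illustrative assumptions). It stamps each arriving packet with d(i) = max{a(i), d(i-1)} + L(i)/r(k) and always transmits the buffered packet with the smallest stamp.

import heapq

class VirtualClockScheduler:
    """Stamp packets with virtual departure times; serve the smallest stamp first."""

    def __init__(self, rates):
        self.rates = rates                        # rates[k] = reserved rate r(k), bytes/sec
        self.last_d = {k: 0.0 for k in rates}     # d(i-1) for each flow
        self.queue = []                           # min-heap of (VDT, seq, flow, length)
        self.seq = 0

    def arrive(self, flow, arrival_time, length):
        # d(i) = max{a(i), d(i-1)} + L(i)/r(k)
        d = max(arrival_time, self.last_d[flow]) + length / self.rates[flow]
        self.last_d[flow] = d
        heapq.heappush(self.queue, (d, self.seq, flow, length))
        self.seq += 1
        return d

    def transmit_next(self):
        # Serve the buffered packet with the smallest virtual departure time.
        if not self.queue:
            return None
        d, _, flow, length = heapq.heappop(self.queue)
        return flow, length, d

# Example: two flows share a link; flow "a" is reserved twice the rate of flow "b".
sched = VirtualClockScheduler({"a": 2_000_000, "b": 1_000_000})
sched.arrive("a", 0.0, 1500)
sched.arrive("b", 0.0, 1500)
sched.arrive("a", 0.0, 1500)
while (pkt := sched.transmit_next()) is not None:
    print(pkt)    # flow "a" drains faster because its reserved rate is larger

The sequence number only breaks ties in the heap; the service order itself depends solely on the virtual departure times, exactly as in the TDM system the scheduler emulates.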
Packetized GPS (Weighted Fair Queueing)
A GPS emulator is fed the arrival times and packet lengths of flows 1..n and determines the virtual departure times (VDTs) of the packets. The packets in the buffer are then transmitted over the link of capacity C in order of their VDTs.

Performance
For Virtual Clock Service and PGPS (weighted fair queueing), the departure time of a packet is upper bounded by its virtual departure time + Lmax/C, where Lmax is the maximum packet size.

Weighted Round Robin: A Practical Scheduler
• Each flow has its own buffer, and the buffers are served in round-robin fashion, BUT a buffer may be skipped under certain conditions.
• Each flow k keeps a credit counter; each time the flow is considered, it gets additional credit r(k).
• If a flow is considered and its head-of-line packet has length at most Credit, the packet is served and Credit = Credit - packet length.

Random Early Detection (RED)
• Tail drop queueing: with a fixed buffer of B packets, the drop probability jumps from 0 to 1 only when the buffer is full. Dropping can be bursty, which is bad for TCP.
• RED: with the same fixed buffer, the drop probability rises gradually with buffer occupancy. Dropping some small fraction of packets early tells TCP to back off. (A small sketch of this drop-probability ramp appears after the traffic-shaping example later in these notes.)

Congestion Control in a Virtual Circuit Subnet
When congestion happens, send warnings to the sources:
• The warning bit: any router along the path can set it.
• Choke packets: on the first choke packet, the source reduces its flow rate by a certain percentage and ignores further choke packets for some interval; if another choke packet arrives after that, it reduces the rate by a certain percentage again, and so on.
• At high speed over long distances this does not work well, so use hop-by-hop choke packets, which give relief sooner but need buffer space at the intermediate routers (buffers or loss?).

Quality of Service
• Jitter: variation in delay.
• Buffering trades off reliability, bandwidth, and delay. Can this smoothing be done on the server side?

Traffic Shaping: Leaky Bucket / Token Bucket
Tokens are generated at rate r into a token bucket of depth s; incoming traffic is buffered and requires tokens to launch data, producing (s, r) traffic R(t).
The token bucket allows bursts. With token bucket capacity C bytes, token arrival rate r bytes/sec, and maximum output rate M bytes/sec, a burst of length S carries C + rS bytes, while the number of bytes sent in S seconds at full rate is MS. Setting C + rS = MS gives S = C/(M - r).
Example: C = 250 KB, M = 25 MB/sec, r = 2 MB/sec, so S = 250 KB / (25 MB/sec - 2 MB/sec) ≈ 10.87 msec.
(The slide's figure also shows an input traffic pattern and the shaped outputs for bucket sizes of 250 KB, 500 KB, and 750 KB, and for a bucket feeding a 10 MB/sec output.)
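As a quick numeric check of the burst-length formula above, here is a small Python sketch (not from the slides; the variable names, the time step dt, and the drain loop are illustrative -- only the three example values come from the slide). It computes S = C/(M - r) and then simulates draining a full bucket at the maximum output rate M while tokens refill at rate r.

# Quick check of the token bucket burst length for the worked example above.
C = 250_000          # token bucket capacity, bytes
M = 25_000_000       # maximum output rate, bytes/sec
r = 2_000_000        # token arrival rate, bytes/sec

# During a maximum-length burst S: bytes sent = M*S, bytes available = C + r*S.
S = C / (M - r)                       # C + r*S = M*S  =>  S = C/(M - r)
print(f"burst length S     = {S * 1e3:.2f} ms")     # ~10.87 ms
print(f"bytes in the burst = {C + r * S:,.0f}")     # = M*S

# Tiny simulation: drain a full bucket at rate M while tokens refill at rate r.
dt, tokens, t = 1e-5, float(C), 0.0
while tokens >= M * dt:               # the burst lasts while full-rate output is covered
    tokens += r * dt - M * dt
    t += dt
print(f"simulated burst    = {t * 1e3:.2f} ms")

Both the closed form and the simulation give a burst of roughly 10.87 ms, during which about C + rS ≈ 272 KB leave the shaper.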
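Returning to the RED discussion above, the following minimal Python sketch shows the gently rising drop probability RED uses instead of tail drop's 0-to-1 jump. The thresholds min_th and max_th and the maximum probability max_p are illustrative assumptions, and the real algorithm also works on an exponentially weighted average of the queue length rather than the instantaneous occupancy.

import random

def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Return the probability of dropping an arriving packet (simplified RED ramp)."""
    if avg_queue < min_th:
        return 0.0                    # queue short: never drop
    if avg_queue >= max_th:
        return 1.0                    # queue long: always drop (tail-drop regime)
    # In between, the drop probability ramps up linearly with average occupancy.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for q in (2, 6, 10, 14, 20):
    p = red_drop_probability(q)
    print(f"avg queue {q:2d}: drop prob {p:.3f}, dropped={random.random() < p}")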
Resource Reservation and Admission Control
• If all packets follow the same route, we can reserve (1) bandwidth, (2) buffer space, and (3) CPU cycles.
• If a flow is well shaped and follows the same route, a router has to decide whether to accept a new flow based on its flow specification: maximum packet size, minimum packet size, peak data rate, token bucket size, and token rate.
• What if there is one aggressive flow?

Packet Scheduling
• Fair queueing and weighted fair queueing.
• How to model it? GPS. How to implement it? Virtual clock or packetized GPS.

Protocols for Streaming Multimedia (Flow-Based Algorithms)
• Multicast membership is dynamic -- can a resource reservation scheme work?
• RSVP (Resource reSerVation Protocol) handles the multi-source, multi-receiver case.
• Disadvantages of flow-based service: it does not scale well, it requires advance setup, maintaining per-flow information is too much state, and frequent router code changes are too much overhead.

Class-Based Service (Differentiated Services)
• Expedited forwarding.
• Assured forwarding: four classes, each with low/medium/high drop precedence.

Label Switching and MPLS
• Routing versus switching.
• Forwarding equivalence classes.
• Data-driven setup with colored threads.

Internetworking
• How do networks differ? Service offered, protocols, addressing, packet size, QoS, error handling, congestion control, security, etc.
• How can networks be connected? Routers (multiprotocol routers) or switches.
• How do we provide internetwork routing? This will be covered with IP; next class we will discuss IP.

A Network Calculus for Performance
R(t) is the rate of a traffic flow at time t.
• Simple model: constant rate r, i.e. R(t) = r.
• More complicated model: (s, r) traffic. For all x < y,
      ∫_x^y R(t) dt ≤ s + r(y - x),
  where s is the burstiness parameter and r is the rate.

(s, r) Traffic Through a Simple FIFO Queue and Link
A flow R(t) enters a FIFO buffer drained by a link of capacity C; B(t) is the backlog at time t (the figure shows B(t) building up to about s after the first packet arrives). For a FIFO buffer fed with (s, r) traffic and C ≥ r:
• the buffer occupancy is at most s + L, and
• the delay is at most (s + L)/C,
where L is the maximum packet length.

A Delay Element Makes Traffic Burstier
Suppose Rin(t) is (sin, rin) traffic and bits may be arbitrarily delayed by between 0 and Dmax before emerging as Rout(t). Any output bit seen in (x, y] must have entered in (x - Dmax, y], so for all x < y
      ∫_x^y Rout(t) dt ≤ ∫_{x-Dmax}^{y} Rin(t) dt ≤ sin + rin (y - x + Dmax) = [sin + rin Dmax] + rin (y - x).
Thus the output is (sin + rin Dmax, rin) traffic: the output is burstier.

Multiplexing (s, r) Flows
If flows R1(t), R2(t), R3(t) with parameters (s1, r1), (s2, r2), (s3, r3) are multiplexed into Rout(t), then
      sout = s1 + s2 + s3 and rout = r1 + r2 + r3.

How Do We Get Such Traffic?
Leaky bucket flow control, exactly as in the traffic-shaping section above: tokens are generated at rate r into a token bucket of depth s; incoming traffic is buffered and requires tokens to launch data, so the output R(t) is (s, r) traffic.
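To illustrate these bounds, here is a minimal discrete-time fluid sketch in Python (not from the slides; the time-slot model, the parameter values, and the randomly generated arrivals are all assumptions made for illustration, and the packet-length term L disappears in a fluid model). It produces an (s, r)-constrained arrival sequence with a token bucket, verifies the envelope over every time window, and checks that the backlog behind a FIFO link of capacity C ≥ r never exceeds s.

import itertools, random

# Minimal discrete-time fluid check of the (s, r) constraint and the FIFO bounds.
s, r, C = 5.0, 2.0, 3.0       # burstiness, average rate, link capacity (C >= r)
T = 200                        # number of time slots
random.seed(1)

# Generate arrivals that respect the (s, r) envelope via token-bucket shaping.
tokens, arrivals = s, []
for _ in range(T):
    tokens = min(s, tokens + r)                      # tokens refill at rate r, capped at s
    a = min(tokens, random.uniform(0.0, 2.0 * r))    # offered load, clipped by the bucket
    tokens -= a
    arrivals.append(a)

# Envelope check: arrivals over any window (x, y] are at most s + r*(y - x).
cum = list(itertools.accumulate(arrivals, initial=0.0))
assert all(cum[y] - cum[x] <= s + r * (y - x) + 1e-9
           for x in range(T) for y in range(x + 1, T + 1))

# Feed the shaped flow into a FIFO link of capacity C and track the backlog B(t).
backlog = max_backlog = 0.0
for a in arrivals:
    backlog = max(0.0, backlog + a - C)              # serve C units per slot
    max_backlog = max(max_backlog, backlog)

# In a fluid model the packet-length term L drops out of the bounds.
print(f"max backlog = {max_backlog:.2f}    (bound: s = {s})")
print(f"max delay  <= {max_backlog / C:.2f} slots (bound: s/C = {s / C:.2f})")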