Joint Techs / APAN, Honolulu
Mark Johnson, MCNC (NCREN, NCNI, NCLR, …)
[email protected]

How can an R&E network afford to build an advanced network?
• Use the obvious strategy of obtaining donations from providers and equipment vendors, and the use of grants
• Make more efficient use of the available (scarce) resources: MORPHnet

Production and experimental infrastructure (MORPHnet concept) and their use
Production use:
• Production IP service (Cisco COTS routers): 10GE and 1GE ports
• Production Ethernet service (Cisco COTS switches): 1GE ports
• Production point-to-point wave service (Cisco COTS DWDM gear): 10GE, 1GE, OC192 and OC48 waves
• Production fiber (1st pair): NLR operated
Research use:
• Experimental L3 or breakable gear
• Experimental L2-3 or breakable gear
• Experimental L1-3 or breakable gear
• Production fiber (2nd pair): NLR or its production customer or researcher operated

Infrastructure use examples
• Research needing its own dark fiber full spectrum and/or deployment of breakable L1 gear, e.g., optical packet switching, an IP-optics unified control plane, 100GE optics
• Production use of dedicated (multiple) 10G bandwidth, e.g., DTF/ETF cluster supercomputer "backplane" interconnect, federal agency mission use, international connection transit
• Research needing its own L1 links and/or dedicated 10G bandwidth, e.g., very large MTU performance, XTP implementation
• Production use for cases where shared IP service is not acceptable but dedicated 10G waves are not needed either, e.g., remote instrument control
• Research needing its own L2 links with the capability to build complex topologies, where speed is not the primary focus and 1GE or lower ports are sufficient, e.g., multicast routing
• Production use for higher-ed and K-12 AUP-free commodity Internet access and inter-GigaPoP transit backup
• Research based on measurements of real user Internet traffic (not just univ-to-univ traffic) and visibility into Internet BGP for the first time since NSFnet

What does a user want from an optical network?
• An end-to-end path (lightpath) where the endpoints are not defined by the limits of a single carrier's network

Lightpaths
• A lightpath is defined to be a fixed-bandwidth connection between two network elements, such as IP routers or ATM switches, established via the optical network
• There is an IETF draft on lightpath attributes

Lightpath attributes
• It is assumed that a lightpath will have a number of attributes that describe it, such as framing, bandwidth, etc.
• CANARIE asserts that across a given AS a lightpath may be abstracted to look like a single (possibly blocking) cross-connect switch interface.
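To make the lightpath abstraction concrete, here is a minimal Python sketch that models a lightpath with the attributes named above (framing, bandwidth) and treats one AS as a single, possibly blocking cross-connect. All names (Lightpath, CrossConnect, connect) are hypothetical illustrations, not CANARIE software or any IETF-defined API.

```python
# Minimal sketch of the lightpath abstraction described above.
# All class and method names are hypothetical; this is not CANARIE's
# UCLP code or an IETF-defined interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class Lightpath:
    """A fixed-bandwidth connection between two network elements."""
    src: str              # e.g. an IP router or ATM switch identifier
    dst: str
    framing: str          # e.g. "SONET", "GE", "G.709"
    bandwidth_gbps: float


class CrossConnect:
    """Abstracts one AS as a single, possibly blocking, cross-connect.

    Each port carries at most one lightpath; a request that needs a
    port already in use is blocked.
    """
    def __init__(self, ports):
        self.free_ports = set(ports)
        self.connections = {}    # (in_port, out_port) -> Lightpath

    def connect(self, in_port, out_port, lp: Lightpath) -> bool:
        if in_port in self.free_ports and out_port in self.free_ports:
            self.free_ports -= {in_port, out_port}
            self.connections[(in_port, out_port)] = lp
            return True
        return False             # blocked: a requested port is busy


if __name__ == "__main__":
    domain = CrossConnect(ports=["A1", "A2", "B1", "B2"])
    lp = Lightpath("routerX", "routerY", framing="GE", bandwidth_gbps=1.0)
    print(domain.connect("A1", "B1", lp))   # True
    print(domain.connect("A1", "B2", lp))   # False: A1 already in use (blocking)
```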
Working examples of lightpaths
• All-optical wavelength on a WDM system
• SONET channel
• Point-to-point Ethernet
• ATM CBR circuit
• MPLS LSP with defined QoS
• Fibre Channel
• SMPTE 259
• G.709 (Digital Wrapper)

Problems
• Intra-domain: provisioning of network capacity across network elements within an AS; O&M
• Inter-domain: provisioning of network capacity across multiple ASes; O&M
• In this environment the user has to handle performance and fault management

Lightpath (diagram: Carrier A, Carrier B, Carrier C)
• The user desires the red end-to-end path but must negotiate and manage provisioning of the green, orange, and blue paths across the individual carriers

Approaches
• Methods of defining, provisioning, and modifying existing services within a management domain: G.ASTN, GMPLS
• Methods of linking paths from multiple domains: UCLP
• Non-traditional techniques for provisioning capacity between endpoints: OBS/JIT

GMPLS
Generalized MPLS signaling identifies the following path types:
• Traditionally statistically multiplexed labeled paths, such as ATM or Ethernet
• Time-division multiplexed paths, such as SONET, where timeslots are the label
• Frequency-division multiplexed services, such as wavelengths, where frequency is the label
• Space-division multiplexed services, such as fibers in a bundle, where position in the real world is the label

Division of labor
• Control plane: signaling, routing, protection/restoration
• Transport: adaptation, aggregation, discovery, data integrity, transmission
• Management: management of faults, configuration/provisioning, accounting, performance measurement, security

Division of labor (figure, poorly copied from Cisco Systems)
• A network topology map feeds a control plane based on IP routing
• Forwarding plane: IP and ATM today, optical in the future

GMPLS protocol diagram (stack figure; recoverable elements): BGP, OSPF-TE, LMP, RSVP-TE, CR-LDP-TE, UDP, TCP, IP, adaptation layer, MAC/GE, ATM, Frame Relay, SONET, wavelength switching, fiber

UCLP
• CANARIE is developing a system including protocols, directories, and registration mechanisms
• Addresses inter-domain issues: registration of available path components; a directory service for those components; provisioning of an end-to-end path, which could use intra-domain tools such as GMPLS

JIT/OBS view of the optical network dilemma
• Goal: lower cost by minimizing OEO and creating larger transparency islands
• But: dedicated capacity is overkill (expensive), and low-speed applications need fine-grain multiplexing capability
• And: existing fine-grain multiplexing requires electronics, hence OEO conversion
• Technology gap
• Gauger et al., "Determining offset times in optical burst switching networks", COST 266, Zagreb, June 2001.
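The offset-time question raised by the Gauger et al. reference can be illustrated with a small calculation. Under the common textbook model of JET-style OBS, the control packet leads the burst by an offset covering per-hop control processing plus switch reconfiguration, so the burst itself never needs buffering; JIT ("tell and go") configures the switch as soon as the setup message arrives and skips per-burst offset bookkeeping. The sketch below is a minimal illustration under those assumptions; the function name and numeric values are hypothetical examples, not figures from the talk or the paper (the 20 ms and 10 ns switch times echo the reconfiguration speeds in the comparison that follows).

```python
# Minimal sketch of a JET-style offset-time calculation for OBS, under the
# textbook assumption that the offset must cover per-hop control-packet
# processing plus switch reconfiguration. Values are hypothetical examples.

def jet_offset_time(hops: int,
                    per_hop_processing_s: float,
                    switch_config_s: float) -> float:
    """Offset between sending the control packet and sending the burst."""
    return hops * per_hop_processing_s + switch_config_s


if __name__ == "__main__":
    # Example: 5 intermediate hops, 10 microseconds of electronic header
    # processing per hop, and a 20 ms MEMS-class switch reconfiguration.
    slow = jet_offset_time(hops=5, per_hop_processing_s=10e-6,
                           switch_config_s=20e-3)
    # Same path assuming a 10 ns fast optical switch.
    fast = jet_offset_time(hops=5, per_hop_processing_s=10e-6,
                           switch_config_s=10e-9)
    print(f"offset with 20 ms switch: {slow * 1e3:.3f} ms")
    print(f"offset with 10 ns switch: {fast * 1e6:.3f} us")
    # JIT ("tell and go") avoids this bookkeeping: the switch is configured
    # immediately when the setup message is processed.
```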
Optical packet switching (OPS)
• Requires optical buffers: immature, expensive, low density; buffers in the network lead to complexity
• IP is a COMPLEX protocol: hardware implementations have appeared only recently, which creates a cost and technology barrier

Optical cell switching (OCS)
• TDM of dWDM: all wavelengths on a fiber are switched together
• Pluses and minuses: simpler core network, but needs chromatic time correction, requires frame synchronization, and gives low utilization of wavelengths
• Lucent is a major proponent

Three competing ideas, compared on need for optical buffering, synchronization, relative timeline to commercial viability, and relative complexity:
• OPS: optical buffering is critical; no synchronization; commercially viable "not in our career lifetime"; complex switching and protocol; requires optical logic
• OCS: no optical buffering; synchronization required; 5 to 10 years to commercial viability; simplest switch core, with the most complex line cards; requires chromatic and frame alignment; low utilization
• OBS: optical buffering no longer seen as necessary or desirable; synchronization not required for JIT, limited sync for JET; 2 years to commercial viability with 20 ms reconfiguration, 7 years with 10 ns reconfiguration; simple line cards with a modestly complex switch core; requires demultiplexing and conversion

JIT fundamental values
• Low latency is the first priority: tell and go vs. tell and wait; may sacrifice link utilization (JET and Horizon)
• Aggressive protocol simplification
• A pox on buffers (optical delay lines): they lead to unnecessary protocol and switch complexities; avoiding them leads to greater link speed and lower latency
• Keep data in optics
• No legacy assumptions
• Result: high throughput, minimal latency and jitter

JIT-OBS approach
• Switched lightpath network: a large all-optical island
• No buffers in the data channel: avoids immature device technology; no buffer overflow in the network
• Data and signaling channel isolation: a single out-of-band signaling channel per fiber; signaling messages undergo OEO and are processed by intermediate nodes
• Network intelligence is concentrated at the edge: a SIMPLE protocol implemented in hardware

ECOnet
Create a confederation of fiber-linked NRT projects:
• BOSnet: MIT Lincoln Labs dark fiber, Boston to Washington, DC
• ATDNet: Naval Research Lab (and others) dark fiber within the Washington, DC metro area
• ECO-South (proposed): MAX/MCNC/SOX dark fiber from Washington to Research Triangle Park and then to Atlanta

East Coast Optical Network (ECOnet) map: MITLL, Boston, MA (BOSnet); MAX/ATDNet, Washington, DC; ECO-South to MCNC/NCREN, Raleigh, NC, and GaTech/SOX, Atlanta, GA

The costing below illustrates the need to evaluate the entire system: fiber, amps, DCUs, maintenance, and rent can become dominant costs.

Two fiber routes (miles, ILAs; no 3R's listed on either route)
Route A:
• Washington, D.C. to Richmond: 129 miles, 1 ILA
• Richmond to Raleigh: 170 miles, 2 ILAs
• Raleigh to Charlotte: 208 miles, 3 ILAs
• Charlotte to Atlanta: 260 miles, 3 ILAs
• TOTAL: 767 miles, 9 ILAs
Route B:
• Washington, D.C. to Raleigh: 476 miles, 9 ILAs
• Raleigh to Atlanta: 563 miles, 12 ILAs
• TOTAL: 1,039 miles, 21 ILAs

Fiber cost (20-year IRU; 2 fibers at $500 per fiber-mile)
Route A:
• Washington, D.C. to Richmond: $129,000
• Richmond to Raleigh: $170,000
• Raleigh to Charlotte: $208,000
• Charlotte to Atlanta: $260,000
• TOTAL: $767,000 (Washington-Raleigh subtotal $299,000; Raleigh-Atlanta subtotal $468,000)
Route B:
• Washington, D.C. to Raleigh: $ -
• Raleigh to Atlanta: $ -
• TOTAL: $ -

Amps, colo, maintenance
Route A:
• Washington, D.C. to Raleigh: ILA racks (20A DC) $600; ILA rent $21,600; POP racks (20A DC) $800; POP rent $9,600; maint $29,900; amp HW $126,000; amps $504,000
• Raleigh to Atlanta: ILA racks (20A DC) $600; ILA rent $43,200; POP racks (20A DC) $800; POP rent $9,600; maint $46,800; amp HW $126,000; amps $882,000
• TOTAL: ILA rent $64,800; maint $76,700; amps $1,386,000
Route B:
• Washington, D.C. to Raleigh: ILA racks (20A DC) $400; ILA rent $43,200; POP racks (20A DC) $800; POP rent $9,600; maint $0; amp HW $126,000; amps $1,260,000
• Raleigh to Atlanta: ILA racks (20A DC) $400; ILA rent $57,600; POP racks (20A DC) $800; POP rent $9,600; maint $0; amp HW $126,000; amps $1,638,000
• TOTAL: ILA rent $100,800; amps $2,898,000

5-year total (year 1; years 2-5 per year; total for years 1-5; a roll-up sketch of this arithmetic appears at the end of the document)
Route A:
• Washington, D.C. to Raleigh: $624,900; $120,900/yr; $1,108,500
• Raleigh to Atlanta: $1,075,200; $183,600/yr; $1,809,600
• TOTAL: $1,700,100; $304,500/yr; $2,918,100
Route B:
• Washington, D.C. to Raleigh: $1,312,800; $43,200/yr; $1,485,600
• Raleigh to Atlanta: $1,705,200; $57,600/yr; $1,935,600
• TOTAL: $3,018,000; $100,800/yr; $3,421,200

NCNI WDM Network (diagram)
• Cisco 15454 nodes at Duke, UNC, NCSU, MCNC (RTP), and RLGH, with DCUs (Dispersion Compensation Units) and EDFA amplifiers on the ring; network management access
• Engineering notes: ring circumference = 178.1099 km; SMF-28 fiber
SMJ, 5-27-03
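The 5-year totals in the route comparison above are simple roll-ups: the year-1 figure plus four further years of recurring cost. The minimal sketch below reproduces that arithmetic for the Route B rows; the function and variable names are hypothetical, and the dollar figures are copied from the 5-year total table above.

```python
# Minimal sketch of the 5-year roll-up used in the route comparison above:
# total = year-1 cost + 4 further years of recurring cost.
# Function and variable names are hypothetical; figures come from Route B.

def five_year_total(year1: float, recurring_per_year: float, years: int = 5) -> float:
    """One-time-heavy first year plus (years - 1) recurring years."""
    return year1 + (years - 1) * recurring_per_year


if __name__ == "__main__":
    route_b = {
        "Washington, D.C. to Raleigh": (1_312_800, 43_200),
        "Raleigh to Atlanta":          (1_705_200, 57_600),
    }
    grand_total = 0.0
    for segment, (year1, recurring) in route_b.items():
        total = five_year_total(year1, recurring)
        grand_total += total
        print(f"{segment}: ${total:,.0f} over 5 years")
    print(f"Route B total: ${grand_total:,.0f}")   # $3,421,200, matching the table
```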