Wide Area Networks for HEP
in the LHC Era
Harvey B Newman
California Institute of Technology
CHEP 2010 conference
Taipei, October 19th, 2010
OUTLINE
Global View of Networks from ICFA SCIC Perspective
Continental and Transoceanic Network Infrastructures
Rise of Dark Fiber Networks
DYNES: Dynamic Network System
CHEP 2001, Beijing
Harvey B Newman
California Institute of Technology
September 6, 2001
The Internet 2009
ICFA Report 2010 - Main Trends Accelerate:
Dark Fiber Nets, Dynamic Circuits, 40-100G
http://cern.ch/icfa-scic
 Current generation of 10 Gbps network backbones and major Int’l links
arrived in 2002-8 in US, Europe, Japan, Korea; Now China, Brazil
 Bandwidth Growth: from 16X to >10,000X in 7 Yrs. >> Moore’s Law (see the rough estimate below)
 Proliferation of 10G links across the Atlantic & Pacific since 2005
Installed Bandwidth for LHC well above 200 Gbps in aggregate
 Rapid Spread of “Dark Fiber” and DWDM: Emergence of Continental, Nat’l,
State & Metro N X 10G “Hybrid” Networks in Many Nations
 Point-to-point “Light-paths” for HEP and “Data Intensive Science”
Now Dynamic Circuits; Managed Bandwidth Channels
 Technology continues to drive Performance Higher, Costs Lower
 Commoditization of GE, now 10 GE ports on servers; 40 GE starting
 Cheaper and faster storage (< $100/Tbyte); 100+ Mbyte/sec disks
 Multicore processors with Multi-Gbyte/sec interconnects
 Appearance of terrestrial 40G and 100G MANs/WANs:
40G optical backbones in commercial and R&E networks
100G pilots/tests in 2009-10, first service deployments in 2011
 Transition to 40G, 100G links: by 2011-12 (on land), ~2012-13 (undersea)
 Outlook: Continued growth in bandwidth deployment & use
“Long Dawn” of the Information Age
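To put the “>> Moore’s Law” bullet above in rough numbers, here is a back-of-the-envelope comparison. It reads the bullet as a roughly 625-fold backbone capacity increase over 7 years and assumes an 18-month Moore’s Law doubling time; both the reading and the doubling time are assumptions made here for illustration.

```python
import math

# Backbone bandwidth growth quoted above: from ~16X to >10,000X of the
# baseline in ~7 years, i.e. a further factor of roughly 625.
growth_factor = 10_000 / 16
doublings = math.log2(growth_factor)          # ~9.3 doublings
network_doubling_yr = 7 / doublings           # ~0.75 years per doubling

# Moore's Law reference point (assumed): one doubling every ~1.5 years.
moore_factor_7yr = 2 ** (7 / 1.5)             # ~25X over the same period

print(f"Network capacity doubled roughly every {network_doubling_yr:.2f} years,")
print(f"versus ~{moore_factor_7yr:.0f}X total from Moore's Law over 7 years.")
```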
Revolutions in Networking
1.97B Internet Users; 550M Broadband (6/30/10)
http://internetworldstats.com
[Chart] World Penetration Rates (09/30/09): North Am. 77%, Australasia/Oceana 61%, Europe 58%, Latin Am. 35%, Mid. East 30%, Asia 22%, Africa 11%, World Av. 29%
 Explosion of bandwidth use: ~6,000 PBytes/mo
 Rise of broadband
 Rise of Video + Mobile Traffic: ~20 Exabytes per mo. (64%) by 2013
 Web 2.0: Billions of Web Pages, embedded apps.
 Facebook, Twitter,
Skype; 4G Mobile
 Beginnings of Web 3.0:
Social, streaming, SOA;
ubiquitous information
 Broadband as a driver of
modern life: from e-banking
to e-training to e-health
Broadband: 100M+ in China, 84M in US
ITU Announces a World Broadband Plan (9/2010)
Revolutions in Networking
Closing the New Digital Divide
http://www.broadbandcommission.org
Goal: 50% of World Population with Broadband by 2015
Fixed Broadband per 100 Inhabitants: 23% in the developed world, 3.6% in the developing world
Mobile Broadband per 100 Inhabitants: 40% in the developed world, 3.1% in the developing world
[Chart] Broadband Subscribers by Region
USLHCNet + ESnet4 Today
USLHCNet 10 Gb/s
USLHCNet 20 Gb/s
International (high speed)
10 Gb/s SDN core
10 Gb/s IP core
MAN rings (≥ 10 Gb/s)
Lab supplied links
OC12 / GigEthernet
OC3 (155 Mb/s)
45 Mb/s and less
US-LHCNet Plan
2010-11: 60, 80 Gbps
NY-CHI-GVA-AMS
 Connections to ESnet MANs in NYC & Chicago
 Redundant “light-paths” to BNL and FNAL
 10 Gbps peerings with Internet2 (2) and GEANT
 Transitioning to 100G in the coming years
GEANT Pan-European
Backbone in 2010
34 NRENs, ~40M Users; 50k km Leased Lines
12k km Dark Fiber; Point to Point Services
GN3 Next Gen. Network Started in June 2009
Dark Fiber Core
Among 19 Countries:
 Austria
 Belgium
 Croatia
 Czech Republic
 Denmark
 Finland
 France
 Germany
 Hungary
 Ireland
 Italy
 Netherlands
 Norway
 Slovakia
 Slovenia
 Spain
 Sweden
 Switzerland
 United Kingdom
GLIF 2010 Map DRAFT
A Global Partnership of R&E Networks and
Advanced Network R&D Projects Supporting HEP
~16 10G TransAtlantic Links
in 2010
2011-2015:
ACE; Next gen.
US LHCNet, etc.
October 14, 2009
The National Science Foundation (NSF)-funded Taj network has expanded
the Global Ring Network for Advanced Application Development (GLORIAD),
wrapping another ring of light around the northern hemisphere for
science and education.
Taj now connects India, Singapore, Vietnam and Egypt to the GLORIAD
global infrastructure and dramatically improves existing U.S. network
links with China and the Nordic region.
The new Taj
expansion to
India & Egypt
GLIF 2010 Map DRAFT: Far East View
ASGCNet (tw)
CSTNet (cn)
KREONet/KOREN (kr)
GLORIAD + Taj
JGN2Plus (jp)
TransPAC3
IEEAF
TaiwanLight
KRLight (kr)
T-LeX (jp)
HKOEP
GLIF 2010 Map DRAFT: European View
R&E Networks, Links and GOLEs
MoscowLight
CATLight
CzechLight
CERNLight
NorthernLight
NetherLight
UKLight
SURFNet
NORDUNet
US LHCNet
KAUST
IceLink
GLORIAD
+ Taj Extension
CERN-TIFR
CESNET
GLIF 2010 Map DRAFT: Brazil
RNP-Ipe
RNP Giga
Kyatera
(Sao Paulo)
CLARA/RNP
Innova Red
(br, ar, cl)
REUNA-ESO
AmLight East
AmLight Andes
Dark Fiber in NREN Backbones 2005 – 2008
Greater or Complete Reliance on Dark Fiber:
A Continuing Trend in 2009-10
[Maps: NREN dark fiber footprints in 2005 and 2008]
TERENA Compendium 2009: www.terena.org/activities/compendium/
Cross Border Dark Fiber
Current and Planned: Increasing Use
TERENA Compendium
SURFNet and NetherLight: 8000 km Dark Fiber
Flexible Photonic Infrastructure
5 Photonic Subnets
λ Switching to 10G; 40GE + 100G Trials
Fixed or Dynamic Lightpaths: LCG, GN3, EXPRES, DEISA, CineGrid
Cross Border Fibers: to Belgium, on to CERN (1600 km); to Germany: X-Win, on to NORDUnet
Erik-Jan Bos
Czech Republic: CESNET2 Reconfigurable
Optical Backbone (2010)
2500+ km Dark Fibers (since 1999)
N X 10 GbE Light-Paths
10 GbE CBDF to Slovakia, Poland, Austria, Netherlight, GEANT
H. Sverenyak
Czech Tier2: 1 GE Lightpaths to the Tier1s
at Fermilab, BNL, Karlsruhe and Taiwan
POLAND: PIONIER 6000 km
Dark Fiber Network in 2010
2 X 10G Among 20 Major University Centers
WLCG POLTIER2
Distributed Tier2
(Poznan, Warsaw,
Cracow) Connects
to Karlsruhe Tier1
Cross Border
Dark Fiber Links
to Russia, Ukraine,
Lithuania, Belarus,
Czech Republic,
and Slovakia
R. Lichwala
GARR-X in Italy: Dark Fiber Network
Supporting LHC Tier1 and Nat’l Tier2 Centers
GARR-X
10G Links Among
Bologna Tier1
& 5 Tier2s
Adding 5 More
Sites at 10G
2 x 10G Circuits
to the LHCOPN
Over GEANT
and to Karlsruhe
Via Int’l Tier2 –
Tier1 Circuits
Cross Border Fibers to Karlsruhe (Via CH, DE)
M. Marletta
France: RENATER5 in 2010
Transition to a Dark Fiber Infrastructure
100G Tests Planned in 2010-11, Between
CERN (Geneva) and CC-IN2P3 (Lyon)
SINET4 (Japan) Connecting 700
Universities with Dark Fibers at 1 to 40G
Optical Waves also to Some Universities
S. Suzuki
DYNES: Dynamic Network
System (NSF/MRI ARRA Project)
 PI: Eric Boyd,
Internet2
Deputy CTO
 Co-PIs:
Harvey Newman
(Caltech)
 Paul Sheldon
(Vanderbilt)
 Shawn McKee
(Michigan)
Funded by US
NSF in 2010-12
DYNES Summary
 What is DYNES ?
 A U.S.-wide dynamic network “cyber-instrument” spanning
~40 US universities and ~14 Internet2 connectors
 Extends Internet2’s dynamic network service “ION” into U.S. regional
networks and campuses; Also internationally for the LHC program
 Based on the implementation of the Inter-Domain Circuit protocol
developed by ESnet and Internet2; Cooperative development
also with GEANT, GLIF
 Who is involved?
 Collaborative team: Internet2, Caltech, Univ. of Michigan, Vanderbilt
 The LHC experiments, the astrophysics community, WLCG, OSG, and
other virtual organizations
 The community of US regional networks and campuses
 What are the goals?
 Support large, long-distance scientific data flows in the LHC, other
programs (e.g. LIGO, Virtual Observatory), & the broader scientific
community
 Build a distributed virtual instrument at sites of interest to the LHC
but available to R&E community generally
Internet2 Dynamic Circuit Network
DYNES
The Problem to be Addressed
 Sustained throughputs at 1-10 Gbps (and some > 10 Gbps) are in
production use today by some Tier2s as well as Tier1s (see the rough transfer-time estimate after this list)
 LHC data volumes and transfer rates are expected to expand by an order
of magnitude over the next several years
 As higher capacity storage and regional, national and transoceanic
40G and 100 Gbps network links become available and affordable.
 Network usage on this scale can only be accommodated with planning,
an appropriate architecture, and national and international community
involvement by
 The LHC groups at universities and labs
 Campuses, regional and state networks connecting to Internet2
 ESnet, US LHCNet, NSF/IRNC, other major networks in US & Europe
 Network resource allocation and data operations need to be consistent
 DYNES will help provide standard services and low cost equipment
to help meet the needs
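As a rough illustration of the scale described above, the snippet below estimates wall-clock transfer times for an assumed 50 TB dataset at several sustained rates; the dataset size is an example chosen here, not a figure from the talk.

```python
# Wall-clock time to move an assumed 50 TB dataset at various sustained rates.
DATASET_TB = 50
dataset_bits = DATASET_TB * 1e12 * 8  # decimal terabytes to bits

for gbps in (1, 10, 40, 100):
    hours = dataset_bits / (gbps * 1e9) / 3600
    print(f"{gbps:>3} Gbps sustained: about {hours:5.1f} hours")
```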
DYNES: Why Dynamic Circuits ?
 To meet the requirements, Internet2 and ESnet, along with several US
regional networks, US LHCNet, and NRENs and GEANT in Europe, have
developed a strategy (starting with a meeting at CERN in March 2004)
 Based on a ‘hybrid’ network architecture
 Where the traditional IP network backbone is paralleled by a
circuit-oriented core network reserved for large-scale science traffic.
 Major examples are Internet2’s Dynamic Circuit Network
(its “ION Service”) and ESnet’s Science Data Network (SDN),
each of which provides:
 Increased effective bandwidth capacity, and reliability of network access,
by mutually isolating the large long-lasting flows (on ION and/or the
ESnet SDN) and the traditional IP mix of many small flows
 Guaranteed bandwidth as a service, by building a system to
automatically schedule and implement virtual circuits traversing
the network backbone (a sketch follows this list), and
 Improved ability of scientists to access network measurement data
for all the network segments end-to-end through the perfSONAR
monitoring infrastructure.
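As a concrete feel for “guaranteed bandwidth as a service”, here is a minimal sketch of a circuit reservation from the user’s side. The CircuitRequest fields and the submit() call are hypothetical stand-ins for an inter-domain controller interface, not the actual ION, OSCARS or SDN API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class CircuitRequest:
    """Hypothetical point-to-point bandwidth reservation (illustrative only)."""
    src_endpoint: str           # e.g. a border switch port at the source site
    dst_endpoint: str
    bandwidth_mbps: int         # guaranteed rate requested for the circuit
    start: datetime
    end: datetime
    vlan: Optional[int] = None  # VLAN tag; a real controller may assign one

def submit(request: CircuitRequest) -> str:
    """Stand-in for a call to an inter-domain controller (IDC).

    A real controller would check availability along the multi-domain path,
    coordinate with neighboring domains, and provision the circuit; here we
    only print the request and return a made-up reservation id.
    """
    print(f"Requesting {request.bandwidth_mbps} Mb/s "
          f"{request.src_endpoint} -> {request.dst_endpoint} "
          f"from {request.start:%Y-%m-%d %H:%M} to {request.end:%H:%M} UTC")
    return "reservation-0001"

if __name__ == "__main__":
    now = datetime.utcnow()
    res_id = submit(CircuitRequest(
        src_endpoint="tier2.example.edu:xe-0/1/0",   # hypothetical endpoints
        dst_endpoint="tier1.example.org:xe-2/0/3",
        bandwidth_mbps=5000,
        start=now,
        end=now + timedelta(hours=6),
    ))
    print("Reservation id:", res_id)
```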
DYNES System Description
AIM: extend hybrid & dynamic capabilities to campus & regional networks.
 A DYNES instrument must provide two basic
capabilities at the Tier2s, Tier3s and
regional networks:
1. Network resource allocation such as
bandwidth to ensure performance of
the transfer
2. Monitoring of the network and data
transfer performance (see the sketch at the end of this slide)
 All networks in the path require the ability
to allocate network resources and monitor
the transfer. This capability currently exists
on backbone networks such as Internet2 and
ESnet, but is not widespread at the campus
and regional level.
 In addition, Tier2 & Tier3 sites require:
3. Hardware at the end sites capable of making optimal use of the
available network resources
[Figure] Two typical transfers that DYNES supports: one Tier2 - Tier3
and another Tier1 - Tier2. The clouds represent the network domains
involved in such a transfer.
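The monitoring capability above ultimately answers a simple question: did the transfer achieve the bandwidth that was allocated? A minimal sketch of that check follows; the byte count, duration and reserved rate are example values chosen here, and a real deployment would take them from perfSONAR and the transfer tools rather than hard-coding them.

```python
def achieved_mbps(bytes_moved: int, seconds: float) -> float:
    """Average throughput of a completed transfer, in megabits per second."""
    return bytes_moved * 8 / seconds / 1e6

# Example values (illustrative): 1.5 TB moved in 45 minutes on a 5 Gb/s circuit.
reserved_mbps = 5000
rate = achieved_mbps(bytes_moved=int(1.5e12), seconds=45 * 60)

print(f"Achieved {rate:.0f} Mb/s of {reserved_mbps} Mb/s reserved "
      f"({100 * rate / reserved_mbps:.0f}% utilization)")
```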
DYNES: Tier2 and Tier3
Instrument Design
 Each DYNES (sub-)instrument
at a Tier2 or Tier3 site consists
of the following hardware,
where each item has been
carefully chosen to combine
low cost & high performance:
1. An Inter-domain Controller (IDC)
2. An Ethernet switch
3. A Fast Data Transfer (FDT) server.
Sites with 10GE throughput
capability will have a dual-port
Myricom 10GE network interface
in the server.
4. An optional attached disk array
with a Serial Attached SCSI (SAS)
controller capable of several
hundred MBytes/sec to local
storage.
[Diagram: numbered instrument components 1-4; disk array: 5 Gbps with 2 Controllers]
The Fast Data Transfer (FDT) server connects to the disk array via
the SAS controller and runs FDT software developed by Caltech.
FDT is an asynchronous multithreaded system that automatically
adjusts I/O and network buffers to achieve maximum network
utilization. The disk array stores datasets to be transferred among
the sites in some cases. The FDT server serves as an aggregator/
throughput optimizer in this case, feeding smooth flows over the
networks directly to the Tier2 or Tier3 clusters. The IDC server
handles the allocation of network resources on the switch, interactions
with other DYNES instruments related to network provisioning, and
network performance monitoring. The IDC creates
virtual LANs (VLANs) as needed.
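The paragraph above describes how FDT keeps wide-area links full by tuning I/O and network buffers. FDT itself is a Java tool from Caltech; the snippet below is only a small Python sketch of the underlying idea, reading a file in large blocks and pushing it over a TCP socket whose send buffer has been enlarged, with illustrative host, port and buffer sizes.

```python
import socket

BLOCK_SIZE = 8 * 1024 * 1024   # large reads keep the disks streaming
SND_BUF = 32 * 1024 * 1024     # big send buffer for high bandwidth-delay paths

def stream_file(path: str, host: str, port: int) -> None:
    """Send one file over a single TCP stream with a tuned send buffer.

    FDT goes further: it is asynchronous and multithreaded, can use several
    parallel streams, and adapts buffer sizes automatically; this sketch
    only shows the basic block-read-and-send loop.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SND_BUF)  # set before connect
    sock.connect((host, port))
    with sock, open(path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            sock.sendall(block)

if __name__ == "__main__":
    # Needs a listener on the far end, e.g. `nc -l 9000 > received.dat`.
    stream_file("dataset.dat", "tier2.example.edu", 9000)
```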
Wide Area Networks for HEP
In the LHC Era: Conclusions
 The Internet is undergoing a sustained revolution, as a driver of world
economic progress
 HEP is at the leading edge and is a driver of R&E networks and
inter-regional links in support of our science
 A global community of networks has arisen that support the LHC
program (as a leading user) and science and education broadly
 We are benefitting from the press of global network traffic
 The transition to the next generation of networks will occur just in
time to meet the LHC experiments’ network needs of future years
 We need to build a network-aware Computing Model architecture
 Focused on consistent computing, storage and network operations
 We are moving from static lightpaths to dynamic circuits
crossing continents and oceans, based on standardized
services.
 DYNES will extend these production capabilities to many US
campuses and regional networks; Just the start of a major new trend.
THANK YOU!
[email protected]