Deutsches Forschungsnetz

GRIDs and Networks - Complementary Infrastructures for the Scientific Community
K. Schauerhammer, K. Ullmann, DFN-Verein, Berlin
DESY Hamburg, 8.12.2003

GRIDs - networked applications
• GRIDs as network application
• User requirements on GRIDs
• D-GRID

The network - basis for GRIDs
• Status G-WiN
• Challenge X-WiN
• Technology options X-WiN
• The market for dark fibre
• International developments
• Roadmap X-WiN
• Engineering issues X-WiN

GRIDs as network application (1)
• GRIDs: a set of network-based applications that use distributed resources (compute services, storage services, common data repositories, ...) "owned" by a specific group of users (a "Virtual Organisation")
• example: large-scale physics experiments organise their data evaluation in a distributed fashion

GRIDs as network application (2)
[Diagram: LHC GRID hierarchy - the experiment feeds Tier 0 (Centre 0); via network access, data flow to the Tier 1 data and compute resources (Centres 1 to n), which in turn serve the Tier 2 to Tier n users]

GRIDs as network application (3)
LHC GRID:
• 4 experiments, start: 2007
• 11 PBytes/year
• LHC community: 5000 physicists, 300 research institutes in 50 countries
• lifetime: 15 years

User requirements on GRIDs (1)
• middleware services such as authentication and authorisation for resource access ("I want to restrict access to my data to users from my experiment only")
• reliable and (sometimes) SLA-guaranteed network access ("I want to contribute data from my repository to an evaluation algorithm at site x with guaranteed minimum delay")

User requirements on GRIDs (2)
• directories describing resources
• unique user interfaces
• scheduling and monitoring facilities for using resources
• facilities for error correction
• user support and hotline services
• dissemination and training of "GRID know-how"

User requirements on GRIDs (3)
• such user requirements clearly lead to the notion of a GRID infrastructure, because
– services are for multiple use (for many user groups / GRIDs)
– services have to be reliable (and not experimental)
• implication: services have to be "engineered"
– this looks simple, but it isn't, due to ongoing technology changes

D-GRID (2) - Middleware
• the same questions arise here:
– single sign-on for services and network access
– AAA services
– migration of existing directories (PKI, LDAP, /etc/passwd files)
• coordination is a big challenge!
• a chance to generate benefit for all communities
• a component of eScience

D-GRID (3) - NRENs' tasks
• provide the network and the generic bundle of GRID-related middleware services
• reasons:
– end users will (potentially) be on multiple GRIDs
– economy
– NRENs are used to offering this sort of infrastructure
– the problem is not trivial (a multi-domain problem)

D-GRID (3a) - NRENs' tasks: the multi-domain problem
[Diagram: data sources Data 0, Data 1 and Data 2 attached to different NRENs (NREN1, NREN2, NREN3), interconnected via Geant]
Both the application problem and the network problem are multi-domain and have an international dimension.

D-GRID (4) - The Initiative
• international activities:
– USA: Cyberinfrastructure programme, 12 Mio $/y
– UK: eScience, 100 Mio £ / 4 years
– NL: Virtual Lab
– EU: 6th framework projects (EGEE, 32 Mio € / 2 y)
• actual state in DE (until the beginning of 2003):
– several individual projects
– low coordination between communities and funding bodies
– low representation of the common interests of the German research community

D-GRID (5) - The Initiative
• 3 meetings of representatives of all involved research institutes + DFN + industry + BMBF
• goal of D-GRID: bundle activities for global, distributed and enhanced research collaboration based on internet services ===> build an e-science framework

D-GRID (6) - Organisation
• D-GRID board: Hegering (LRZ), Hiller (AWI), Maschuw (FZK, GRIDKa), Reinefeld (ZIB), Resch (HLRS)
• tasks:
– to prepare a political and strategic statement of the German research community
– to build up WGs, to plan an MoU
– to develop a working programme

D-GRID (7) - Role of DFN
• role of DFN:
– to provide network resources
  for GRIDs (special GRID access to G-WiN)
– to provide and support middleware services (e.g. PKI, AAA)
– to participate in developing the work programme for the next years
– to participate in international projects like EGEE and GN2

D-GRID (8) - Role of BMBF
• BMBF expects common commitment and co-funding from research organisations and industry
• in Q3/04: tender for e-science projects
• BMBF funding announced: 5 - 10 Mio €/y in 2005 - 2008

The network - basis for GRIDs
• Status G-WiN
• Challenge X-WiN
• Technology options X-WiN
• The market for dark fibre
• International developments
• Roadmap X-WiN
• Engineering issues X-WiN

G-WiN (1) - General characteristics
• 27 nodes distributed across Germany, mostly at universities / research labs
• core: flexible SDH platform (2.5 G; 10 G)
• ~500 access lines, 128 K - 622 M
• occasional lambda links and "dark fibre"
• own IP NOC
• special customer-driven solutions (VPNs, accesses etc.) are based on the platform
• diverse access options, including dial-up and DSL (dfn@home)

G-WiN (2) - Topology
[Map, as of 12/03: core nodes Rostock, Kiel, Hamburg, Oldenburg, Braunschweig, Hannover, Berlin, Magdeburg, Bielefeld, Essen, Göttingen, Leipzig, St. Augustin, Dresden, Marburg, Aachen, Frankfurt, Ilmenau, Würzburg, Erlangen, Heidelberg, Karlsruhe, Regensburg, Kaiserslautern, Augsburg, Stuttgart, Garching; links at 10 Gbit/s, 2.4 Gbit/s and 622 Mbit/s; global upstream and GEANT connected at Frankfurt]

G-WiN (2a) - Extension plans 04
[Map, as of Q3/04: same core nodes; links at 10 Gbit/s, 2.4 Gbit/s and 622 Mbit/s; Geant access at 10 Gbit/s]

G-WiN (2b) - Geant
[Map: Geant topology]

G-WiN (3) - Usage
• new demands:
– GRID: "(V)PN" + middleware + applications
– "value-added" IP services
• examples of new usage patterns:
– computer-computer link H-B
– video conference service
• volume and growth rate: see figure

G-WiN (4) - Usage figures
[Chart: development of the imported data volume in terabytes/month, Okt 02 - Okt 03, y-axis 0 - 1,400, broken down by source: global upstream, Géant, other ISPs, T-Interconnect, DE-CIX]

G-WiN (5) - QoS (core)
[Figure: QoS in the core]

G-WiN (6) - QoS measurements
Performance measurements for the particle physics community: TCP between end points E1 and E2 (GRIDKa / G-WiN / Geant / CERN)
[Diagram: router sequence along the path E1 (1 G) - G-WiN (2.4 G) - Geant (10 G) - E2 at CERN (1 G); throughput E1-E2; flows possible]

G-WiN (6a) - QoS measurement results

Test   Src-Sink    Rate sent    Rate received
UDP-3  Ka - CERN   980 Mbit/s   956 Mbit/s
UDP-4  CERN - Ka   dto.         954 Mbit/s
TCP-1  Ka - CERN   dto.         510 Mbit/s
TCP-8  Ka - CERN   dto.         923 Mbit/s

Challenges for X-WiN (1)
• DFNInternet:
– low delay (<10 ms) and jitter (<1 ms)
– extremely low packet loss (see measurements)
– throughput per user stream >1 Gbit/s possible
– priority option for special applications
– disaster recovery

Challenges for X-WiN (2)
• special solutions on demand should be possible
• distributed data processing, i.e. GRIDs (radio astronomy, particle physics, biology, data storage, computing, ...)
• dedicated source-sink characteristic of streams

Challenges for X-WiN (3)
• 10 G links between all nodes
• flexible reconfiguration (<7 d)
• cheap Ethernet - expensive routers!?
• high MTBF and low MTTR in core and components
• 24/7 operation of the platform
• bandwidth on demand, if technically and economically feasible

Technology options X-WiN (1) - General
• there is nothing "magic"
• diverse approaches are possible
• options for optimal value adding:
– SDH/Ethernet as basic platform?
– managed lambdas?
– managed dark fibre and own WDM?
• 24/7 operation of the platform

Technology options X-WiN (2) - SDH/Ethernet service
• a package containing:
– flexibility for reconfiguration
– operations staff with 24/7 availability
– SLAs with legal bindings
• tool-box model:
– n SDH/Ethernet links
– specified time lines for reconfiguration
– functional extension of the tool box possible

Technology options X-WiN (3) - Managed lambdas
• the service contains:
– lambdas as a service
– SDH/Ethernet "do it yourself or by others"
– switched 10 G network ("L2-WiN")
– switching in L2-WiN according to user needs
– 24/7 operation of L2-WiN
• advantage: shaping according to own needs is possible

Technology options X-WiN (4) - Managed dark fibre
• like managed lambdas, but...
– buy own managed dark fibre (L1-WiN)
– "self-made" WDM value adding (filters, optical MUX, EDFA, 3R regeneration)
– 24/7 operation as a service?
• advantage: additional bandwidth is rather cheap and scalable

The market for dark fibre (example)
• example: GasLine
• optical fibre along gas pipelines
• very high MTBF
• the budget offer looks interesting
• many user sites along the links
• a business model is possible...

International developments (1)
• hypothesis: "Ethernet switches with 10 G interfaces are stable and cheap."
• new generations of research networks:
– USA (Abilene)
– Poland (Pionier)
– Czech Republic (Cesnet)
– Netherlands (Surfnet)
– Canada (Canarie)
– ...
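The DFNInternet targets under "Challenges for X-WiN (1)" (delay < 10 ms, jitter < 1 ms) are the kind of figures a simple active probe can verify. The sketch below is not from the talk: it uses a loopback UDP echo thread as a stand-in for a real measurement peer, and estimates jitter as the mean absolute difference of consecutive round-trip times.

```python
# Hypothetical sketch: round-trip delay and jitter probe against a UDP echo
# service. Host, port and probe count are stand-in assumptions, not values
# from the talk; a real measurement would target a remote peer.
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    """Echo every datagram back to its sender (stand-in for a remote peer)."""
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return  # socket closed, stop the thread
        sock.sendto(data, addr)

def measure_rtts(target, probes=20):
    """Send small UDP probes and return round-trip times in milliseconds."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(1.0)
        for i in range(probes):
            start = time.perf_counter()
            s.sendto(str(i).encode(), target)
            s.recvfrom(2048)
            rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

def jitter(rtts):
    """Mean absolute difference of consecutive RTTs (a simple jitter estimate)."""
    return statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # loopback echo in place of a real target
    threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()
    rtts = measure_rtts(server.getsockname())
    print(f"mean delay {statistics.mean(rtts):.3f} ms, jitter {jitter(rtts):.3f} ms")
    server.close()
```

On loopback the numbers are of course far below the X-WiN targets; the point is only the shape of the measurement, not the values.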
International developments (2)
[Figure]

International developments (3)
[Figure]

Engineering issues (1) - The traffic matrix
• from an engineering point of view, a network can be described by a traffic matrix T, where T(i,j) describes the traffic flow requirements between network end points i and j
• T(i,j) can map user requirements directly
• every network has an underlying T (explicit in the case of engineered networks, implicit in "grown" networks)

Engineering issues (2) - Examples for T
• G-WiN: assumption of a statistical traffic mix (FT, visualisation etc.); T(i,j) describes the load at peak usage time; the bandwidth of a (network) link derived from T(i,j) is always 4 times higher than the peak load
• video conferencing network: specific requirements with respect to jitter

Engineering issues (3) - The "LHC T"
• experiment evaluation facilities must be available in the middle of the decade
• due to a couple of technology dependencies of the evaluation systems, the 2005/2006 perspective of T is not exactly known today
• compromise: T has to be iterated on a 1-2 year basis
• ongoing e2e measurements
• close cooperation, for example in the EGEE context

Roadmap X-WiN
• testbed activities (optical testbed "Viola"): network technology tests in (real) user environments, design input for X-WiN
• at present: meetings with suppliers of dark fibre, operations and technical components; feasibility study (ready early 2004)
• road map:
– market investigation: Q1/04
– concept until Q3/04; CFP: Q4/04
– 2005: migration G-WiN -> X-WiN (by Q4/05)
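The traffic-matrix dimensioning described under "Engineering issues" can be sketched in a few lines. The node names and load figures below are invented for illustration; the only rule taken from the talk is the G-WiN factor of 4 between the peak load T(i,j) and the provisioned link bandwidth.

```python
# Hypothetical sketch (node names and loads invented): deriving link
# capacities from a traffic matrix T, where T[i][j] is the peak load in
# Gbit/s between end points i and j, using the G-WiN rule that a link is
# provisioned at 4x the peak load it has to carry.
OVERPROVISION = 4  # G-WiN rule of thumb: capacity = 4 x peak load

def link_capacities(T, factor=OVERPROVISION):
    """Return the required capacity per undirected node pair: the factor
    times the larger of the two directional peak loads."""
    caps = {}
    for i in T:
        for j, load in T[i].items():
            pair = tuple(sorted((i, j)))  # treat i->j and j->i as one link
            caps[pair] = max(caps.get(pair, 0.0), factor * load)
    return caps

# Invented peak loads in Gbit/s between three example nodes
T = {
    "Frankfurt": {"Berlin": 1.2, "Garching": 0.8},
    "Berlin": {"Frankfurt": 0.9},
}
print(link_capacities(T))
# Frankfurt-Berlin must cover the larger direction: max(1.2, 0.9) * 4 Gbit/s
```

The same structure extends naturally to an iterated T, as the "LHC T" slide suggests: re-measure the peak loads every one to two years and recompute the capacities.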