Transcript
HENP Networks and Grids for Global Science
Harvey B. Newman
California Institute of Technology
3rd International Data Grid Workshop
Daegu, Korea August 26, 2004
Challenges for Global HENP Experiments
LHC Example- 2007
5000+ Physicists
250+ Institutes
60+ Countries
BaBar/D0 Example - 2004
500+ Physicists
100+ Institutes
35+ Countries
Major Challenges (Shared with Other Fields)
 Worldwide Communication and Collaboration
 Managing Globally Distributed Computing & Data Resources
 Cooperative Software Development and Data Analysis
Large Hadron Collider (LHC)
CERN, Geneva: 2007 Start
 pp s =14 TeV L=1034 cm-2 s-1
 27 km Tunnel in Switzerland & France
First Beams: Summer 2007; Physics Runs: from Fall 2007
Detectors: ATLAS and CMS (pp, general purpose; HI), TOTEM, ALICE (HI), LHCb (B-physics)
Higgs, SUSY, QG Plasma, CP Violation, … the Unexpected
Challenges of Next Generation
Science in the Information Age
Petabytes of complex data explored and analyzed by
1000s of globally dispersed scientists, in hundreds of teams
 Flagship Applications
 High Energy & Nuclear Physics, AstroPhysics Sky Surveys:
TByte to PByte “block” transfers at 1-10+ Gbps
 Fusion Energy: Time Critical Burst-Data Distribution;
Distributed Plasma Simulations, Visualization and Analysis;
Preparations for Fusion Energy Experiment
 eVLBI: Many real time data streams at 1-10 Gbps
 BioInformatics, Clinical Imaging: GByte images on demand
 Provide results with rapid turnaround, coordinating
large but limited computing and data handling resources,
over networks of varying capability in different world regions
 Advanced integrated applications, such as Data Grids,
rely on seamless operation of our LANs and WANs
 With reliable, quantifiable high performance
LHC Data Grid Hierarchy:
Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
[Diagram: Experiment/Online System at CERN generates ~PByte/sec; ~100-1500 MBytes/sec flows to the Tier 0+1 CERN Center (PBs of disk; tape robot); 10-40 Gbps links to Tier 1 centers (IN2P3, INFN, RAL, FNAL); ~10 Gbps links to Tier 2 centers; 1 to 10 Gbps to Tier 3 (institutes, physics data caches); Tier 4: workstations]
Tens of Petabytes by 2007-8. An Exabyte ~5-7 Years later.
Emerging Vision: A Richly Structured, Global Dynamic System
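To put these link speeds in perspective, here is a minimal sketch (illustrative only, not from the talk) of how long bulk transfers take at the nominal per-tier bandwidths quoted above:

```python
# Illustrative sketch: time to move a dataset at the nominal per-tier link
# speeds quoted in the hierarchy above (low end of each quoted range).
TIER_LINKS_GBPS = {
    "Tier 0/1 <-> Tier 1 (10-40 Gbps)": 10,
    "Tier 1 <-> Tier 2 (~10 Gbps)": 10,
    "Tier 2 <-> Tier 3 (1-10 Gbps)": 1,
}

def transfer_days(volume_tb: float, gbps: float) -> float:
    """Days needed to move volume_tb terabytes at a sustained rate of gbps."""
    seconds = volume_tb * 8e12 / (gbps * 1e9)
    return seconds / 86400

for link, gbps in TIER_LINKS_GBPS.items():
    print(f"{link}: 100 TB in {transfer_days(100, gbps):.1f} days")
```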
ICFA and Global Networks
for Collaborative Science
 National and International Networks, with sufficient
(rapidly increasing) capacity and seamless end-to-end
capability, are essential for
 The daily conduct of collaborative work in both
experiment and theory
 Experiment development & construction
on a global scale
 Grid systems supporting analysis involving
physicists in all world regions
 The conception, design and implementation of
next generation facilities as “global networks”
 “Collaborations on this scale would never have been
attempted, if they could not rely on excellent networks”
History of Bandwidth Usage – One Large
Network; One Large Research Site
ESnet Accepted Traffic 1/90 – 1/04
Exponential Growth Since ’92;
ESnet Monthly Accepted Traffic 1/90-1/04
Annual Rate Increased from 1.7 to 2.0X
Per Year In the Last 5 Years
[Chart: ESnet monthly accepted traffic, Jan 1990 - Jul 2003, 0-300 TBytes/Month]
SLAC Traffic ~300 Mbps; ESnet Limit
Growth in Steps: ~ 10X/4 Years
Projected: ~2 Terabits/s by ~2014
Int’l Networks BW on Major Links
for HENP: US-CERN Example
 Rate of Progress >> Moore’s Law (US-CERN Example)
 9.6 kbps Analog (1985)
 64-256 kbps Digital (1989-1994) [X 7-27]
 1.5 Mbps Shared (1990-3; IBM) [X 160]
 2-4 Mbps (1996-1998) [X 200-400]
 12-20 Mbps (1999-2000) [X 1.2k-2k]
 155-310 Mbps (2001-2) [X 16k-32k]
 622 Mbps (2002-3) [X 65k]
 2.5 Gbps (2003-4) [X 250k]
 10 Gbps (2005) [X 1M]
 4x10 Gbps or 40 Gbps (2007-8) [X 4M]
 A factor of ~1M Bandwidth Improvement over 1985-2005 (a factor of ~5k during 1995-2005)
 A prime enabler of major HENP programs
 HENP has become a leading applications driver, and also a co-developer of global networks
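The bracketed multipliers follow directly from the link capacities relative to the 9.6 kbps analog link of 1985; a small sketch of that arithmetic (only a few representative steps shown):

```python
# Sketch: improvement factors relative to the 9.6 kbps analog link of 1985,
# reproducing the bracketed multipliers for a few representative steps.
BASELINE_BPS = 9.6e3
steps_bps = {
    "64-256 kbps (1989-1994)": (64e3, 256e3),
    "155-310 Mbps (2001-2)":   (155e6, 310e6),
    "10 Gbps (2005)":          (10e9, 10e9),
    "40 Gbps (2007-8)":        (40e9, 40e9),
}
for label, (lo, hi) in steps_bps.items():
    print(f"{label}: x{lo / BASELINE_BPS:,.0f} to x{hi / BASELINE_BPS:,.0f}")
```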
Internet Growth in the World At Large
Amsterdam Internet Exchange Point Example
[Chart: AMS-IX traffic, 11.08.04: 5-minute maximum and average curves, ~20-30 Gbps]
Some Annual Growth Spurts; Typically In Summer-Fall
The Rate of HENP Network Usage Growth (~100% Per Year) is Similar to the World at Large
LSR History - IPv4 single stream
[Chart: Internet2 Land Speed Record progression, Apr 2002 - Jun 2004, from 0.4 Gbps to 6.6 Gbps over 16,500 km, with intermediate records at 0.9, 2.5, 4.2, 5.4 and 5.6 Gbps over ~7,000-16,343 km paths; monitoring of the Abilene traffic in LA; see http://www.guinnessworldrecords.com/]
 Judged on product of transfer
speed and distance end-to-end,
using standard Internet (TCP/IP)
protocols.
 IPv6 record: 4.0 Gbps between
Geneva and Phoenix (SC2003)
 IPv4 Multi-stream record with
Windows & Linux: 6.6 Gbps
between Caltech and CERN (16 kkm;
“Grand Tour d’Abilene”) June 2004
 Exceeded 100 Petabit-m/sec
 Single Stream 7.5 Gbps X 16 kkm
with Linux Achieved in July
 Concentrate now on reliable
Terabyte-scale file transfers
Note System Issues: CPU, PCI-X
Bus, NIC, I/O Controllers, Drivers
June 2004 Record: Internet2 Land Speed Record (LSR), measured in network Petabit-meters (10^15 bit*meter)
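The LSR figure of merit is the product of end-to-end throughput and distance; a quick sketch of that arithmetic for the two 2004 results quoted above:

```python
# Sketch: the Internet2 LSR metric, throughput x distance, in Petabit-meters
# per second (1 Pb*m/s = 1e15 bit*m/s).
def petabit_meters_per_sec(gbps: float, km: float) -> float:
    return gbps * 1e9 * km * 1e3 / 1e15

print(petabit_meters_per_sec(6.6, 16_500))   # multi-stream, June 2004: ~108.9
print(petabit_meters_per_sec(7.5, 16_000))   # single stream, July 2004: 120.0
```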
Evolving Quantitative Science Requirements for
Networks (DOE High Perf. Network Workshop)
Science Areas | Today End2End Throughput | 5 Years End2End Throughput | 5-10 Years End2End Throughput | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | High bulk throughput
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | High bulk throughput
SNS NanoScience | Not yet started | 1 Gb/s | 1000 Gb/s + QoS for Control Channel | Remote control and time critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s | Time critical throughput
Astrophysics | 0.013 Gb/s (1 TByte/week) | N*N multicast | 1000 Gb/s | Computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s + QoS for Control Channel | High throughput and steering
HENP Lambda Grids:
Fibers for Physics
 Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes
from 1 to 1000 Petabyte Data Stores
 Survivability of the HENP Global Grid System, with
hundreds of such transactions per day (circa 2007)
requires that each transaction be completed in a
relatively short time.
 Example: Take 800 secs to complete the transaction. Then:
Transaction Size (TB) | Net Throughput (Gbps)
1 | 10
10 | 100
100 | 1000 (Capacity of Fiber Today)
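The throughput column is just the transaction size divided by the 800-second window; a minimal sketch of the arithmetic (the helper name below is ours, not from the talk):

```python
# Sketch: sustained throughput needed to move a transaction of size_tb
# terabytes within an 800-second window (1 TB = 8e12 bits).
def required_throughput_gbps(size_tb: float, window_s: float = 800.0) -> float:
    return size_tb * 8e12 / window_s / 1e9

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB in 800 s -> {required_throughput_gbps(size_tb):.0f} Gbps")
```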
 Summary: Providing Switching of 10 Gbps wavelengths
within ~2-4 years; and Terabit Switching within 5-8 years
would enable “Petascale Grids with Terabyte transactions”,
to fully realize the discovery potential of major HENP programs,
as well as other data-intensive research.
SCIC in 2003-2004
http://cern.ch/icfa-scic
Three 2004 Reports; Presented to ICFA in February
 Main Report: “Networking for HENP” [H. Newman et al.]
 Includes Brief Updates on Monitoring, the Digital Divide
and Advanced Technologies [*]
 A World Network Overview (with 27 Appendices):
Status and Plans for the Next Few Years of National &
Regional Networks, and Optical Network Initiatives
 Monitoring Working Group Report
[L. Cottrell]
 Digital Divide in Russia
[V. Ilyin]
August 2004 Update Reports at the SCIC Web Site:
See http://icfa-scic.web.cern.ch/ICFA-SCIC/documents.htm
 Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China);
Brazil, Korea, etc.
SCIC Main Conclusion for 2003
Setting the Tone for 2004
 The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively, and we develop dynamic Grid systems in the "most favored" regions
 We must take action, and work to Close the Digital Divide
 To make Physicists from All World Regions Full
Partners in Their Experiments; and in the Process
of Discovery
 This is essential for the health of our global
experimental collaborations, our plans for future
projects, and our field.
ICFA Report: Networks for HENP
General Conclusions (2)
 Reliable high End-to-end Performance of networked applications such as
large file transfers and Data Grids is required. Achieving this requires:
 End-to-end monitoring extending to all regions serving our community.
A coherent approach to monitoring that allows physicists throughout
our community to extract clear information is required.
 Upgrading campus infrastructures.
These are still not designed to support Gbps data transfers in most HEP centers. One reason for under-utilization of national and Int'l backbones is the lack of bandwidth to end-user groups on the campus.
 Removing local, last mile, and nat’l and int’l bottlenecks
end-to-end, whether technical or political in origin.
While National and International backbones have reached 2.5 to 10 Gbps
speeds in many countries, the bandwidths across borders, the
countryside or the city may be much less.
This problem is very widespread in our community, with
examples stretching from the Asia Pacific to Latin America
to the Northeastern U.S. Root causes for this vary, from lack
of local infrastructure to unfavorable pricing policies.
ICFA Report (2/2004) Update:
Main Trends Continue, Some Accelerate
 Current generation of 2.5-10 Gbps network backbones and major Int'l links arrived in the last 2-3 Years [US+Europe+Japan; Now Korea and China]
 Capability: 4 to Hundreds of Times; Much Faster than Moore's Law
 Proliferation of 10G links across the Atlantic Now; Will Begin use of Multiple 10G Links (e.g. US-CERN) Along Major Paths by Fall 2005
 Direct result of Falling Network Prices: $0.5 - 1M Per Year for 10G
 Ability to fully use long 10G paths with TCP continues to advance: 7.5 Gbps X 16 kkm (August 2004)
 Technological progress driving equipment costs in end-systems lower
 "Commoditization" of Gbit Ethernet (GbE) ~complete ($20-50 per port); 10 GbE commoditization (e.g. < $2K per NIC with TOE) underway
 Some regions (US, Europe) moving to owned or leased dark fiber
 Emergence of the "Hybrid" Network Model: GNEW2004; UltraLight, GLIF
 Grid-based Analysis demands end-to-end high performance & management
 The rapid rate of progress is confined mostly to the US, Europe, Japan and Korea, as well as the major Transatlantic routes; this threatens to cause the Digital Divide to become a Chasm
Work on the Digital Divide:
Several Perspectives
 Work on Policies and/or Pricing: pk, in, br, cn, SE Europe, …
   Find Ways to work with vendors, NRENs, and/or Gov'ts
   Exploit Model Cases: e.g. Poland, Slovakia, Czech Republic
 Inter-Regional Projects
   GLORIAD, Russia-China-US Optical Ring
   South America: CHEPREO (US-Brazil); EU CLARA Project
   Virtual SILK Highway Project (DESY): FSU satellite links
 Workshops and Tutorials/Training Sessions
   For Example: Digital Divide and HEPGrid Workshop, UERJ Rio, February 2004; Next: Daegu, May 2005
 Help with Modernizing the Infrastructure
   Design, Commissioning, Development
   Tools for Effective Use: Monitoring, Collaboration
 Participate in Standards Development; Open Tools
   Advanced TCP stacks; Grid systems
Grid and Network Workshop
at CERN March 15-16, 2004
WORKSHOP GOALS
 Share and challenge the lessons learned by nat'l and international projects in the past three years
 Share the current state of engineering and network infrastructure and its likely evolution in the near future
 Examine our understanding of the networking needs of Grid applications (e.g., see the ICFA-SCIC reports)
 Develop a vision of how network engineering and infrastructure will (or should) support Grid computing needs in the next three years
CONCLUDING STATEMENT
"Following the 1st International Grid Networking Workshop (GNEW2004) that was held at CERN and co-organized by CERN/DataTAG, DANTE, ESnet, Internet2 & TERENA, there is a wide consensus that hybrid network services capable of offering both packet- and circuit/lambda-switching, as well as highly advanced performance measurements and a new generation of distributed system software, will be required in order to support emerging data intensive Grid applications, such as High Energy Physics, Astrophysics, Climate and Supernova modeling, Genomics and Proteomics, requiring 10-100 Gbps and up over wide areas."
Transition beginning now to optical, multiwavelength Community owned or leased
“dark fiber” networks for R&E
National Lambda Rail (NLR)
[Map: NLR national footprint across 18 US States (SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OGD, DEN, KAN, DAL, STR, CHI, CLE, PIT, NAS, ATL, JAC, WAL, OLG, RAL, NYC, BOS, WDC), with segments marked "Up Now" and "Coming"; legend: 15808 Terminal, Regen or OADM site; fiber route]
 Initially 4 10G Wavelengths; To 40 10G Waves in Future
 Northern Route Operation by 4Q04
 Internet2 HOPI Initiative (w/HEP)
 Dark fiber networks also in nl, de, pl, cz, jp
JGN2: Japan Gigabit Network (4/04 – 3/08)
20 Gbps Backbone, 6 Optical Cross-Connects
[Legend: 20Gbps / 10Gbps / 1Gbps lines; optical testbeds; access points; core network nodes]
Core network nodes: Sapporo, Sendai, Kanazawa, Nagano, Otemachi (Tokyo), Nagoya, Osaka, Okayama, Kochi, Fukuoka, Okinawa, NICT Koganei Headquarters, NICT Tsukuba Research Center, NICT Keihanna Human Info-Communications Research Center, NICT Kita Kyushu IT Open Laboratory; link to USA
Access points:
<10G> Ishikawa Hi-tech Exchange Center (Tatsunokuchi-machi, Ishikawa Prefecture)
<100M> Toyama Institute of Information Systems (Toyama); Fukui Prefecture Data Super Highway AP* (Fukui)
<1G> Teleport Okayama (Okayama); Hiroshima University (Higashi Hiroshima)
<100M> Tottori University of Environmental Studies (Tottori); Techno Ark Shimane (Matsue); New Media Plaza Yamaguchi (Yamaguchi)
<10G> Kyushu University (Fukuoka)
<100M> NetCom Saga (Saga); Nagasaki University (Nagasaki); Kumamoto Prefectural Office (Kumamoto); Toyonokuni Hyper Network AP* (Oita); Miyazaki University (Miyazaki); Kagoshima University (Kagoshima)
<10G> Kyoto University (Kyoto); Osaka University (Ibaraki)
<1G> NICT Kansai Advanced Research Center (Kobe)
<100M> Lake Biwa Data Highway AP* (Ohtsu); Nara Prefectural Institute of Industrial Technology (Nara); Wakayama University (Wakayama); Hyogo Prefecture Nishiharima Technopolis (Kamigori-cho, Hyogo Prefecture)
<100M> Niigata University (Niigata); Matsumoto Information Creation Center (Matsumoto, Nagano Prefecture)
<100M> Kagawa Prefecture Industry Promotion Center (Takamatsu); Tokushima University (Tokushima); Ehime University (Matsuyama); Kochi University of Technology (Tosayamada-cho, Kochi Prefecture)
<1G> Tohoku University (Sendai); NICT Iwate IT Open Laboratory (Takizawa-mura, Iwate Prefecture)
<100M> Hachinohe Institute of Technology (Hachinohe, Aomori Prefecture); Akita Regional IX* (Akita); Keio University Tsuruoka Campus (Tsuruoka, Yamagata Prefecture); Aizu University (Aizu Wakamatsu)
<10G> Tokyo University (Bunkyo Ward, Tokyo); NICT Kashima Space Research Center (Kashima, Ibaraki Prefecture)
<100M> Hokkaido Regional Network Association AP* (Sapporo)
<100M> Nagoya University (Nagoya); University of Shizuoka (Shizuoka); Softopia Japan (Ogaki, Gifu Prefecture); Mie Prefectural College of Nursing (Tsu)
<1G> Yokosuka Telecom Research Park (Yokosuka, Kanagawa Prefecture)
<100M> Utsunomiya University (Utsunomiya); Gunma Industrial Technology Center (Maebashi); Reitaku University (Kashiwa, Chiba Prefecture); NICT Honjo Information and Communications Open Laboratory (Honjo, Saitama Prefecture); Yamanashi Prefecture Open R&D Center (Nakakoma-gun, Yamanashi Prefecture)
*IX: Internet eXchange; AP: Access Point
APAN-KR : KREONET/KREONet2 II
UltraLight Collaboration:
http://ultralight.caltech.edu
 Caltech, UF, FIU,
UMich, SLAC,FNAL,
MIT/Haystack,
CERN, UERJ(Rio),
NLR, CENIC, UCAID,
Translight, UKLight,
Netherlight, UvA,
UCLondon, KEK,
Taiwan
 Cisco, Level(3)
 Integrated hybrid experimental network, leveraging Transatlantic
R&D network partnerships; packet-switched + dynamic optical paths
 10 GbE across US and the Atlantic: NLR, DataTAG, TransLight,
NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Brazil
 End-to-end monitoring; Realtime tracking and optimization;
Dynamic bandwidth provisioning
 Agent-based services spanning all layers of the system, from the
optical cross-connects to the applications.
GLIF: Global Lambda Integrated Facility
“GLIF is a World Scale
Lambda based Lab for
Application and
Middleware
development, where
Grid applications ride on
dynamically configured
networks based on
optical wavelengths ...
Coexisting with more traditional packet-switched network traffic."
4th GLIF Workshop: Nottingham UK, Sept. 2004
10 Gbps Wavelengths For R&E Network Development Are Proliferating Across Continents and Oceans
PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
1660 km of Dark
Fiber CWDM Links,
up to 112 km.
1 to 4 Gbps (GbE)
August 2002:
First NREN in
Europe to establish
Int’l GbE Dark Fiber
Link, to Austria
April 2003 to Czech
Republic.
Planning 10 Gbps
Backbone; dark
fiber link to Poland
this year.
Dark Fiber in Eastern Europe
Poland: PIONIER Network
2650 km of Fiber Connecting 16 MANs; 5200 km and 21 MANs by 2005
Support:
 Computational Grids
 Domain-Specific Grids
 Digital Libraries
 Interactive TV
 Add'l Fibers for e-Regional Initiatives
[Map: MANs at Gdańsk, Koszalin, Olsztyn, Szczecin, Bydgoszcz, Białystok, Toruń, Poznań, Warszawa, Gubin, Zielona Góra, Siedlce, Łódź, Puławy, Wrocław, Radom, Częstochowa, Kielce, Opole, Gliwice, Katowice, Kraków, Cieszyn, Bielsko-Biała, Rzeszów, Lublin; legend: installed fiber, PIONIER nodes, fibers planned in 2004, PIONIER nodes planned in 2004]
The Advantage of Dark Fiber
CESNET Case Study (Czech Republic)
2513 km of Leased Fibers (Since 1999)
Case Study Result: Wavelength Service vs. Fiber Lease gives Cost Savings of 50-70% Over 4 Years for Long 2.5G or 10G Links
Route examples: * about 150 km (e.g. Ústí n.L. - Liberec); ** about 300 km (e.g. Praha - Brno)
Capacity | Leased wavelength (EUR/month), 150 km / 300 km | Leased fibre with own equipment (EUR/month), 150 km* / 300 km** | Own equipment assumed
1 x 2.5G | 7,000 / 8,000 | 5,000 / 7,000 | *: 2 x booster 18dBm; **: 2 x booster 27dBm + 2 x preamplifier + 6 x DCF
4 x 2.5G | 14,000 / 23,000 | 8,000 / 11,000 | *: 2 x booster 24dBm, DWDM 2.5G; **: 2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 2.5G
1 x 10G | 14,000 / 16,000 | 5,000 / 8,000 | *: 2 x booster 21dBm, 2 x DCF; **: 2 x (booster 21dBm + in-line + preamplifier) + 6 x DCF
4 x 10G | 29,000 / 47,000 | 12,000 / 14,000 | *: 2 x booster 24dBm, 2 x DCF, DWDM 10G; **: 2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 10G
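The quoted 50-70% savings for the long 2.5G and 10G cases can be roughly reproduced from the monthly figures above; an illustrative sketch (capital cost of the owned equipment and operations effort are not modeled here):

```python
# Sketch: monthly savings of leased fibre + own equipment vs. leased
# wavelength service, from the table above. The quoted 50-70% range applies
# to the longer / higher-capacity cases; the short 1 x 2.5G case saves less.
monthly_eur = {
    # capacity: ((wavelength 150 km, 300 km), (fibre 150 km, 300 km))
    "1 x 2.5G": ((7_000, 8_000),   (5_000, 7_000)),
    "4 x 2.5G": ((14_000, 23_000), (8_000, 11_000)),
    "1 x 10G":  ((14_000, 16_000), (5_000, 8_000)),
    "4 x 10G":  ((29_000, 47_000), (12_000, 14_000)),
}
for cap, ((w150, w300), (f150, f300)) in monthly_eur.items():
    print(f"{cap}: {(w150 - f150) / w150:.0%} (150 km), {(w300 - f300) / w300:.0%} (300 km)")
```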
ICFA/SCIC Network Monitoring
Prepared by Les Cottrell, SLAC, for
ICFA
www.slac.stanford.edu/grp/scs/net/talk03/icfa-aug04.ppt
PingER: World View from SLAC
 Now monitoring 650 sites in 115 countries
 In last 9 months:
 Several sites in Russia (GLORIAD)
 Many hosts in Africa (27 of 54 Countries)
 Monitoring sites in Pakistan, Brazil
TCP throughput measured from N. America to World Regions (from the PingER project, Aug 2004)
[Chart: derived TCP throughput in KBytes/sec (log scale, ~1 to 10,000), Jan 1995 - Dec 2004, by region (number of hosts monitored): Edu (141), Canada (27), Europe (150), Latin America (37), Mid East (16), S.E. Europe (21), C. Asia (8), China (13), Russia (17), Caucasus (8), India (7), Africa (30). Trend: 50% Improvement/year, ~ factor of 10 in < 6 years. Important for policy makers.]
View from
CERN
Confirms
This View
C. Asia, Russia, SE Europe, L. America, M. East, China: 4-5 yrs behind
India, Africa: 7 yrs behind
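The "years behind" figures follow from the ~50%-per-year improvement trend in the chart; a small worked sketch of that relation:

```python
# Sketch: at ~50% improvement per year, a throughput gap of factor R
# corresponds to log(R)/log(1.5) years of lag.
import math

ANNUAL_IMPROVEMENT = 1.5

def years_behind(throughput_ratio: float) -> float:
    return math.log(throughput_ratio) / math.log(ANNUAL_IMPROVEMENT)

print(f"10x gap -> {years_behind(10):.1f} years (a factor of 10 in < 6 years)")
print(f"5x gap  -> {years_behind(5):.1f} years (roughly the 4-5 year lag)")
print(f"17x gap -> {years_behind(17):.1f} years (roughly the 7 year lag)")
```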
Research Networking in Latin
America: August 2004
 The only Countries with research
network connectivity now in
Latin America:
Argentina, Brazil, Chile,
Mexico, Venezuela
 AmPath Provided connectivity for
some South American countries
 New Sao Paolo-Miami Link at 622 Mbps
Starting This Month
AmPath
New: CLARA (Funded by EU)
 Regional Network Connecting 19 Countries:
Argentina
Brasil
Bolivia
Chile
Colombia
Costa Rica
Cuba
Dominican Republic
Ecuador
El Salvador
Guatemala
Honduras
Mexico
Panama
Paraguay
Peru
Uruguay
Venezuela
Nicaragua
155 Mbps Backbone with 10-45 Mbps Spurs;
4 Mbps Satellite to Cuba; 622 Mbps to Europe
Also NSF Proposals To
Connect at 2.5G to US
HEPGRID (CMS) in Brazil
HEPGRID-CMS/BRAZIL is a project to build a Grid that
At Regional Level will include CBPF,UFRJ,UFRGS,UFBA, UERJ & UNESP
At International Level will be integrated with CMS Grid based at CERN;
focal points include iVGDL/Grid3 and bilateral projects with Caltech Group
Brazilian HEPGRID
[Diagram: CERN (T0+T1) linked at 2.5-10 Gbps to T1 centers in France, Germany, Italy and the USA; 622 Mbps link to BRAZIL; UERJ Regional Tier2 Center (T2 → T1, 100 → 500 nodes; plus T2s to 100 nodes); Gigabit connections to UERJ, CBPF, UFBA, UFRJ; UNESP/USP SPRACE (working) and UFRGS as T3 → T2 sites; T4: individual machines; on-line systems]
Latin America Science Areas Interested
in Improving Connectivity ( by Country)
[Table: subjects (Astrophysics, e-VLBI, High Energy Physics, Geosciences, Marine sciences, Health and Biomedical applications, Environmental studies) by country: Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico]
Networks and Grids: The Potential to Spark a New Era of Science in the Region
Asia Pacific Academic Network Connectivity
APAN Status July 2004
[Map: APAN connectivity, July 2004: access and exchange points in JP, KR, CN, TW, HK, TH, VN, PH, MY, SG, ID, IN, LK, AU and RU, with links to the US and Europe; current and planned (2004) link capacities range from ~1.5-2 Mbps to multi-Gbps (e.g. ~20.9G and 9.1G to the US, 2G for CN, 1.2G for KR)]
Connectivity to US from JP, KO, AU is Advancing Rapidly. Progress in the Region, and to Europe, is Much Slower.
Better North/South Linkages within Asia
JP-SG link: 155 Mbps in 2005 is proposed to NSF by CIREN
JP-TH link: 2 Mbps → 45 Mbps in 2004 is being studied.
CIREN is studying an extension to India
APAN Link Information
Countries | Network | Bandwidth (Mbps) | AUP/Remark
AU-US | AARNet | 310 to 2 x 10 Gbps soon | R&E + Commodity
AU-US (PAIX) | AARNet | 622 | R&E + Commodity
CN-HK | CERNET | 622 | R&E + Commodity
CN-HK | CSTNET | 155 | R&E
CN-JP | CERNET | 155 | R&E
CN-JP | CERNET | 45 | Native IPv6
CN-US | CERNET | 155 | R&E + Commodity
CN-US | CSTNET | 155 | R&E
HK-US | HARNET | 45 | R&E
HK-TW | HARNET/TANET/ASNET | 100 | R&E
IN-US/UK | ERNET | 16 | R&E
JP-ASIA | UDL | 9 | R&E
JP-ID | AI3(ITB) | 0.5/1.5 | R&E
JP-KR | APII | 2 Gbps | R&E
JP-LA | AI3(NUOL) | 0.128/0.128 | R&E
JP-MY | AI3(USM) | 1.5/0.5 | R&E
JP-PH | AI3(ASTI) | 1.5/0.5 | R&E
JP-PH | MAFFIN | 6 | Research
JP-SG | AI3(TP) | 1.5/0.5 | R&E
JP-TH | AI3(AIT) | (service interrupted) | R&E
JP-TH | SINET(ThaiSarn) | 2 | R&E
JP-US | TransPac | 5 Gbps to 2x10 Gbps soon | R&E
2004.7.7 [email protected]
APAN Link Information
Countries | Network | Bandwidth (Mbps) | AUP/Remark
(JP)-US-EU | SINET | 155 | R&E / No Transit
JP-US | SINET | 5 Gbps | R&E / No Transit
JP-US | IEEAF | 10 Gbps | R&E wave service
JP-US | IEEAF | 622 | R&E
JP-US | Japan-Hawaii | 155 | R&E
JP-VN | AI3(IOIT) | 1.5/0.5 | R&E
KR-FR | KOREN/RENATER | 34 | Research (TEIN)
KR-SG | APII | 8 | R&E
KR-US | KOREN/KREONet2 | 1.2 Gbps | R&E
LK-JP | LEARN | 2.5 | R&E
MY-SG | NRG/SICU | 2 | Experiment (Down)
SG-US | SingaREN | 90 | R&E
TH-US | Uninet | 155 | R&E
TW-HK | ASNET/TANET/TWAREN | 622 | R&E
TW-JP | ASNET/TANET | 622 | R&E
TW-SG | ASNET/SingAREN | 155 | R&E
TW-US | ASNET/TANET/TWAREN | 6.6 Gbps | R&E
(TW)-US-NL | ASNET/TANET/TWAREN | 2.5 Gbps | R&E
2004.7.7 [email protected]
APAN Recommendations
(at July 2004 Meeting in CAIRNS, Au)
Central Issues for APAN this decade
 Stronger linkages between applications and infrastructure; neither can exist independently
 Stronger application and infrastructure linkages among APAN
members.
 Continuing focus on APAN as an organization that represents
infrastructure interests in Asia
 Closer connection between APAN the infrastructure &
applications organization and regional political organizations
(e.g. APEC, ASEAN)
New issues demand attention
 Application measurement, particularly end-to-end network
performance measurement is increasingly critical (deterministic
networking)
 Security must now be a consideration for every application and
every network.
KR-US/CA Transpacific connection
 Participation in Global-scale Lambda Networking
 Two STM-4 circuits (1.2G) : KR-CA-US
 Global lambda networking : North America, Europe,
Asia Pacific, etc.
[Diagram: Global Lambda Networking: KREONET/SuperSIReN and APII-testbed/KREONet2 in Korea connected via 2 x STM-4 circuits through CA*Net4 to Seattle (PacWave) and Chicago (StarLight)]
New Record!!! 916 Mbps from CHEP to Caltech (22/06/’04)
Subject: UDP test on KOREN-TransPAC-Caltech
Date: Tue, 22 Jun 2004 13:47:25 +0900
From: "Kihwan Kwon" <[email protected]>
To: <[email protected]>

[root@sul Iperf]# ./iperf -c socrates.cacr.caltech.edu -u -b 1000m
------------------------------------------------------------
Client connecting to socrates.cacr.caltech.edu, UDP port 5001
Sending 1470 byte datagrams; UDP buffer size: 64.0 KByte (default)
------------------------------------------------------------
[  5] local 155.230.20.20 port 33036 connected with 131.215.144.227
[ ID] Interval          Transfer     Bandwidth
[  5] 0.0-2595.2 sec    277 GBytes   916 Mbits/sec
[Diagram: KNU/Korea → G/H-Japan → TransPAC → USA path; Max. 947.3 Mbps]
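As a consistency check (illustrative only), the reported bandwidth can be recomputed from the transfer volume and interval, keeping in mind that iperf reports GBytes in binary units:

```python
# Sketch: recompute the iperf result above. iperf's "GBytes" are binary
# (2**30 bytes), so 277 GBytes over 2595.2 s gives ~916-917 Mbits/sec.
transferred_bytes = 277 * 2**30
interval_s = 2595.2
rate_mbps = transferred_bytes * 8 / interval_s / 1e6
print(f"{rate_mbps:.0f} Mbit/s")
```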
Global Ring Network for Advanced Applications Development
 OC3 circuits Moscow-Chicago-
Beijing since January 2004
 OC3 circuit Moscow-Beijing July
2004 (completes the ring)
 Korea (KISTI) joining US, Russia,
China as full partner in GLORIAD
 Plans developing for Central Asian
extension (w/Kyrgyz Government)
 Rapid traffic growth with heaviest
US use from DOE (FermiLab), NASA,
NOAA, NIH and Universities
(UMD, IU, UCB, UNC, UMN, PSU,
Harvard, Stanford, Wash., Oregon,
250+ others)
Aug. 8 2004: P.K. Young,
Korean IST Advisor to
President Announces
 Korea Joining GLORIAD
 TEIN gradually to 10G,
connected to GLORIAD
 Asia Pacific Info. InfraStructure (1G) will be
backup net to GLORIAD
> 5TBytes now transferred monthly
via GLORIAD to US, Russia, China
GLORIAD 5-year Proposal Pending (with US NSF) for expansion: 2.5G Moscow-Amsterdam-Chicago-Seattle-Hong Kong-Pusan-Beijing circuits early 2005; 10G ring around the northern hemisphere 2007; multiple wavelength service 2009, providing hybrid circuit-switched (primarily Ethernet) and routed services
Internet in China
(J.P.Wu APAN July 2004)
Internet users in China: from 68 Million to 78 Million within 6 months
Total: 68M; Wireline: 23.4M; Dial Up: 45.0M; ISDN: 4.9M; Broadband: 9.8M
IP Addresses: 32M (1 A + 233 B + 146 C)
Backbone: 2.5-10G DWDM + Router
International links: 20G
Exchange Points: > 30G (BJ, SH, GZ)
Last Miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, Dial-up
Need IPv6
China: CERNET Update
1995, 64K Nation wide backbone connecting
8 cities, 100 Universities
1998, 2M Nation wide backbone connecting
20 cities, 300 Universities
2000, Own dark fiber crossing 30+ major
cities and 30,000 kilometers
2001, CERNET DWDM/SDH network finished
2001, 2.5G/155M Backbone connecting 36
cities, 800 universities
2003,1300 + universities and institutes, over
15 million users
CERNET2 and Key Technologies
CERNET 2: Next Generation Education and
Research Network in China
CERNET 2 Backbone connecting 15-20
GigaPOPs at 2.5G-10Gbps (I2-like Model)
Connecting 200 Universities and 100+
Research Institutes at 1Gbps-10Gbps
Native IPv6 and Lambda Networking
Support/Deployment of the following technologies:
 E2E performance monitoring
 Middleware and Advanced Applications
 Multicast
AFRICA: Key Trends
M. Jensen and P. Hamilton Infrastructure Report, March 2004
 Growth in traffic and lack of infrastructure: predominance of satellite, but these satellites are heavily subscribed
 Int’l Links: Only ~1% of traffic on links is for Internet connections;
Most Internet traffic (for ~80% of countries) via satellite
 Flourishing Grey market for Internet & VOIP traffic using VSAT dishes
 Many Regional fiber projects in "planning phase" (some languished in the past); only links from South Africa to Namibia and Botswana done so far
 Int’l fiber Project: SAT-3/WASC/SAFE Cable from South Africa to Portugal
Along West Coast of Africa
 Supplied by Alcatel to Worldwide Consortium of 35 Carriers
 40 Gbps by Mid-2003; Heavily Subscribed. Ultimate Capacity 120 Gbps
 Extension to Interior Mostly by Satellite: < 1 Mbps to ~100 Mbps typical
Note: World Conference on Physics and Sustainable Development,
10/31 – 11/2/05 in Durban South Africa; Part of World Year of Physics 2005.
Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP
AFRICA: Nectar Net Initiative
Growing Need to connect academic researchers, medical
researchers & practitioners to many sites in Africa
Examples:
 CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases, Nat'l Library of Medicine (Ghana, Nigeria)
 Gates $50M HIV/AIDS Center in Botswana; Project Coord. at Harvard
 Africa Monsoon AMMA Project, Dakar Site [cf. East US Hurricanes]
 US Geological Survey: Global Spatial Data Infrastructure
 Distance Learning: Emory-Ibadan (Nigeria); Research Channel Content
But Africa is Hard: 11M Sq. Miles, 600 M People, 54 Countries
 Little Telecommunications Infrastructure
Approach: Use SAT-3/WASC Cable (to Portugal), GEANT
Across Europe, Amsterdam-NY Link Across the Atlantic,
then Peer with R&E Networks such as Abilene in NYC
 Cable Landings in 8 West African Countries and South Africa
 Pragmatic approach to reach end points: VSAT satellite, ADSL, microwave, etc.
[Slide: W. Matthews, Georgia Tech]
Bandwidth prices in Africa vary dramatically; they are in general many times what they could be if universities purchased in volume
Sample Bandwidth Costs for African Universities
($/kbps/month) Nigeria $20.00; Average $11.03; Uganda $9.84; Ghana $6.77; IBAUD Target $3.00; USA $0.27
Sample size of 26 universities; average cost for VSAT service (Quality, CIR, Rx, Tx not distinguished)
[Source: Roy Steiner, Internet2 Workshop]
Grid2003: An Operational
Production Grid, Since October 2003
 27 sites (U.S., Korea)
 2300-2800 CPUs
 700-1100 Concurrent Jobs
www.ivdgl.org/grid2003
Trillium:
PPDG
GriPhyN
iVDGL
Korea
Prelude to Open Science Grid: www.opensciencegrid.org
HENP Data Grids, and Now
Services-Oriented Grids
 The original Computational and Data Grid concepts are
largely stateless, open systems
 Analogous to the Web
 The classical Grid architecture had a number of implicit
assumptions
 The ability to locate and schedule suitable resources,
within a tolerably short time (i.e. resource richness)
 Short transactions with relatively simple failure modes
 HENP Grids are Data Intensive & Resource-Constrained
 Resource usage governed by local and global policies
 Long transactions; some long queues
 Analysis: 1000s of users competing for resources
at dozens of sites: complex scheduling; management
 HENP Stateful, End-to-end Monitored and Tracked Paradigm
 Adopted in OGSA [Now WS Resource Framework]
The Move to OGSA and then Managed Integrated Systems
[Diagram: evolution over time, with increasing functionality and standardization: Custom solutions (X.509, LDAP, FTP, …) → De facto standards (GGF: GridFTP, GSI; Globus Toolkit) → Open Grid Services Arch / Web Services Resource Framework (stateful, managed; GGF: OGSI, …, plus OASIS, W3C; multiple implementations, including Globus Toolkit; Web services + app-specific services) → ~Integrated Systems]
Managing Global Systems: Dynamic
Scalable Services Architecture
MonALISA: http://monalisa.cacr.caltech.edu
24 X 7 Operations
Multiple Orgs.
 Grid2003
 US CMS
 CMS-DC04
 ALICE
 STAR
 VRVS
 ABILENE
 Soon: GEANT
 + GLORIAD
 “Station Server”
Services-engines
at sites host many
“Dynamic Services”
 Scales to
thousands of
service-Instances
 Servers autodiscover
and interconnect
dynamically to form
a robust fabric
 Autonomous agents
+ CLARENS: Web Services Fabric and Portal Architecture
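A generic illustration (not MonALISA code, names are ours) of the register-and-discover pattern described above, in which station servers announce themselves and peers fetch the current view of the fabric:

```python
# Generic illustration of dynamic service registration/discovery; it does not
# reproduce the MonALISA API, only the pattern described in the slide.
from dataclasses import dataclass, field

@dataclass
class LookupService:
    registry: dict = field(default_factory=dict)    # service name -> endpoint

    def register(self, name: str, endpoint: str) -> None:
        self.registry[name] = endpoint               # a station server announces itself

    def discover(self) -> dict:
        return dict(self.registry)                   # peers fetch the current fabric view

lookup = LookupService()
lookup.register("StationServer.Grid2003", "monitor01.example.org:9000")  # hypothetical endpoints
lookup.register("StationServer.USCMS",    "monitor02.example.org:9000")
print(lookup.discover())
```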
Grid Analysis Environment
CLARENS: Web Services Architecture (Caltech GAE Team)
[Diagram: Analysis Clients connect via HTTP, SOAP, XML/RPC to the CLARENS Grid Services Web Server; services include Scheduler, Catalogs (Metadata, Virtual Data, Replica), Fully-Abstract / Partially-Abstract / Fully-Concrete Planners, Data Management, Monitoring, Execution Priority Manager, Grid-Wide Execution Service, and Applications]
 Analysis Clients talk standard protocols to the CLARENS "Grid Services Web Server", with a simple Web service API
 The secure Clarens portal hides the complexity of the Grid Services from the client
 Key features: Global Scheduler, Catalogs, Monitoring, and Grid-wide Execution service; Clarens servers form a Global Peer-to-peer Network
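Since the clients speak standard protocols such as XML-RPC, a client call can be sketched generically; the server URL and method name below are hypothetical and do not reproduce the actual CLARENS API:

```python
# Hypothetical sketch of an analysis client calling a catalog service over
# XML-RPC, one of the protocols listed above (endpoint and method invented).
import xmlrpc.client

server = xmlrpc.client.ServerProxy("https://clarens.example.org:8443/clarens/")
replicas = server.catalog.list_replicas("/store/mc/2004/sample_01")   # hypothetical method
for replica in replicas:
    print(replica)
```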
World Summit on the Information Society
(WSIS): Geneva 12/2003 and Tunis in 2005
 The UN General Assembly adopted in 2001 a resolution
endorsing the organization of the World Summit on the
Information Society (WSIS), under UN Secretary-General,
Kofi Annan, with the ITU and host governments taking
the lead role in its preparation.
 GOAL: To Create an Information Society:
A Common Definition was adopted
in the “Tokyo Declaration” of January 2003:
“… One in which highly developed ICT networks, equitable
and ubiquitous access to information, appropriate content
in accessible formats and effective communication can
help people achieve their potential”
 Kofi Annan Challenged the Scientific Community to Help (3/03)
 CERN and ICFA SCIC have been quite active in the WSIS in
Geneva (12/2003)
Role of Science in the Information
Society; WSIS 2003-2005
 HENP Active in WSIS
 CERN RSIS Event
 SIS Forum & CERN/Caltech
Online Stand at WSIS I
(Geneva 12/03)
 Visitors at WSIS I
 Kofi Annan, UN Sec’y General
 John H. Marburger,
Science Adviser to US President
 Ion Iliescu, President of Romania;
and Dan Nica, Minister of ICT
 Jean-Paul Hubert, Ambassador
of Canada in Switzerland
 …
 Planning Underway for
WSIS II: Tunis 2005
HEPGRID and Digital Divide Workshop
UERJ, Rio de Janeiro, Feb. 16-20 2004
Theme: Global Collaborations, Grids and
Their Relationship to the Digital Divide
NEWS: Bulletins ONE, TWO; Welcome Bulletin; General Information; Registration; Travel Information; Hotel Registration; Participant List; How to Get to UERJ/Hotel; Computer Accounts; Useful Phone Numbers; Program; Contact us: Secretariat, Chairmen
Tutorials: C++, Grid Technologies, Grid-Enabled Analysis, Networks, Collaborative Systems
ICFA, understanding the vital role of these issues for our field's future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998, to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This led to ICFA's approval, in July 2003, of the Digital Divide and HEP Grid Workshop.
More Information: http://www.lishep.uerj.br
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
Sessions & Tutorials Available (w/Video) on the Web
International ICFA Workshop on HEP
Networking, Grids and Digital Divide
Issues for Global e-Science
Proposed Workshop Dates: May 23-27, 2005
Venue: Daegu, Korea
Dongchul Son
Center for High Energy Physics
Kyungpook National University
ICFA, Beijing, China
Aug. 2004
ICFA Approval
Requested Today
International ICFA Workshop on HEP Networking, Grids
and Digital Divide Issues for Global e-Science
 Themes
 Networking, Grids, and Their Relationship to the Digital Divide for
HEP as Global e-Science
 Focus on Key Issues of Inter-regional Connectivity
 Mission Statement
 ICFA, understanding the vital role of these issues for our field’s
future, commissioned the Standing Committee on Inter-regional
Connectivity (SCIC) in 1998, to survey and monitor the state of the
networks used by our field, and identify problems. For the past
three years the SCIC has focused on understanding and seeking
the means of reducing or eliminating the Digital Divide, and
proposed to ICFA that these issues, as they affect our field of
High Energy Physics, be brought to our community for
discussion.
This workshop, the second in the series begun with the 2004 Digital Divide and HEP Grid Workshop in Rio de Janeiro
(approved by ICFA in July 2003) will carry forward this work while
strengthening the involvement of scientists, technologists and
governments in the Asia Pacific region.
고에너지물리연구센터
CENTER FOR HIGH ENERGY PHYSICS
International ICFA Workshop on HEP Networking, Grids
and Digital Divide Issues for Global e-Science
 Workshop Goals
 Review the current status, progress and barriers to the effective use
of the major national, continental and transoceanic networks used
by HEP
 Review progress, strengthen opportunities for collaboration, and
explore the means to deal with key issues in Grid computing and
Grid-enabled data analysis, for high energy physics and other fields
of data intensive science, now and in the future
 Exchange information and ideas, and formulate plans to develop
solutions to specific problems related to the Digital Divide in
various regions, with a focus on Asia Pacific, Latin America,
Russia and Africa
 Continue to advance a broad program of work on reducing or
eliminating the Digital Divide, and ensuring global collaboration,
as related to all of the above aspects.
고에너지물리연구센터
CENTER FOR HIGH ENERGY PHYSICS
Networks and Grids, GLORIAD,
ITER and HENP
 Network backbones and major links used by major experiments
in HENP and other fields are advancing rapidly
 To the 10 G range in < 2 years; much faster than Moore’s Law
 New HENP and DOE Roadmaps: a factor ~1000 improvement per decade
 We are learning to use long distance 10 Gbps networks effectively
 2003-2004 Developments: to 7.5 Gbps flows over 16 kkm
 Important advances in Asia-Pacific, notably Korea
 A transition to community-owned and operated R&E networks
is beginning (us, ca, nl, pl, cz, sk …) or considered (de, ro, …)
 We Must Work to Close the Digital Divide
 Allowing Scientists and Students from All World Regions
to Take Part in Discoveries at the Frontiers of Science
 Removing Regional, Last Mile, Local Bottlenecks and
Compromises in Network Quality are now On the Critical Path
 GLORIAD is A Key Project to Achieve These Goals
 Synergies Between the Data-Intensive Missions of HENP & ITER
 Enhancing Partnership and Community among the US, Russia
and China: both in Science and Education
SC2004: HEP Network Layout
Preview of Future Grid systems
[Diagram: SC2004 show-floor connectivity: Caltech/CACR and LA with 2 metro 10 Gbps waves (LA-Caltech); 2 x 10 Gbps and 10 Gbps NLR waves to StarLight; 3 x 10 Gbps TeraGrid; 10 Gbps Abilene; 10 Gbps LHCNet to CERN (Geneva) and the UK; connections to SLAC, FNAL, Australia, Japan and Brazil]
 Joint Caltech, CERN, SLAC, FNAL,
UKlight, HP, Cisco… Demo
 6 to 8 10 Gbps waves to HEP
setup on the show floor
 Bandwidth challenge: aggregate
throughput goal
of 40 to 60 Gbps
SCIC in 2003-2004
http://cern.ch/icfa-scic
 Strong Focus on the Digital Divide Continues
 A Striking Picture Continues to Emerge: Remarkable
Progress in Some Regions, and a Deepening Digital
Divide Among Nations
 Intensive Work in the Field: > 60 Meetings and Workshops:
 E.g., Internet2, TERENA, AMPATH, APAN, CHEP2003, SC2003,
Trieste, Telecom World 2003, SC2003, WSIS/RSIS, GLORIAD
Launch, Digital Divide and HEPGrid Workshop (Feb. 16-20 in
Rio), GNEW2004, GridNets2004, NASA ONT Workshop, … etc.
 3rd Int’l Grid Workshop in Daegu (August 26-28, 2004); Plan for
2nd ICFA Digital Divide and Grid Workshop in Daegu (May 2005)
 HENP increasingly visible to governments; heads of state:
 Through Network advances (records), Grid developments,
Work on the Digital Divide and issues of Global Collaboration
 Also through the World Summit on the Information Society
Process. Next Step is WSIS II in TUNIS November 2005
Coverage
 Now monitoring 650 sites in 115 countries
 In last 9 months added:
 Several sites in Russia (thanks GLORIAD)
 Many hosts in Africa (5 → 36 now; in 27 out of 54 countries)
 Monitoring sites in Pakistan and Brazil (Sao Paolo and Rio)
 Working to install monitoring host in Bangalore, India
[Map legend: monitoring sites and remote sites]
Latin America: CLARA Network
(2004-2006 EU Project)
 Significant contribution from
European Comission and Dante
through ALICE project
 NRENs in 18 LA countries forming
a regional network for
collaboration traffic
 Initial backbone ring bandwidth of 155 Mbps
 Spur links at 10 to 45 Mbps
(Cuba at 4 Mbps by satellite)
 Initial connection to Europe at 622
Mbps from Brazil
 Tijuana (Mexico) PoP soon to be
connected to US through dark fibre
link (CUDI-CENIC)
 Access to US, Canada and the Asia-Pacific Rim
NSF IRNC 2004: Two Proposals to
Connect CLARA to the US (and Europe)
[Map: 1st Proposal: FIU and CENIC; 2nd Proposal: Indiana and Internet2; proposed links from CLARA to the US West Coast and East Coast]
Note: CHEPREO (FIU, UF, FSU Caltech, UERJ, USP, RNP)
622 Mbps Sao Paolo – Miami Link Started in August
GIGA Project: Experimental Gbps
Network: Sites in Rio and Sao Paolo
Universities
IME
PUC-Rio
UERJ
UFF
UFRJ
Unesp
Unicamp
USP
R&D Centres
CBPF - physics
CPqD - telecom
CPTEC - meteorology
CTA - aerospace
Fiocruz - health
IMPA - mathematics
INPE - space sciences
LNCC - HPC
LNLS - physics
Slide from M. Stanton
[Map: about 600 km extension (not to scale) linking, via telco fibers, LNCC, CTA, INPE, CPqD, LNLS, Unicamp, CPTEC, Fapesp, Unesp, USP (Incor and C.Univ.), CBPF, Fiocruz, IME, IMPA, RNP, PUC-Rio, UERJ, UFRJ and UFF]
Trans-Eurasia Information Network
TEIN (2004-2007)
 Circuit between KOREN(Korea) and RENATER(France).
 AP Beneficiaries:
China, Indonesia, Malaysia, Philippines, Thailand, Vietnam
(Non-beneficiaries: Brunei, Japan, Korea, Singapore)
 EU partners: NRENs of France, Netherlands, UK
 The scope expanded to South-East Asia and China recently.
 Upgraded to 34 Mbps in 11/2003. Upgrade to 155Mbps planned
 12M Euro EU Funds
 Coordinating Partner
DANTE
 Direct EU-AP Link;
Other Links go
Across the US
APAN China Consortium
Established in 1999. The China Education and Research Network (CERNET), the Natural Science Foundation of China Network (NSFCNET) and the China Science and Technology Network (CSTNET) are the three main advanced networks.
CERNet
NSFCnet
2.5 Gbps
Tsinghua --- Tsinghua University
PKU --- Peking University
NSFC --- Natural Science Foundation of China
CAS --- China Academy of Sciences
BUPT --- Beijing Univ. of Posts and Telecom.
BUAA --- Beijing Univ. of Aero- and Astronautics
GLORIAD: Global Optical Ring
(US-Ru-Cn; Korea Now Full Partner )
DOE: ITER Distributed Ops.;
Fusion-HEP Cooperation
NSF: Collaboration of Three
Major R&E Communities
Also Important for
Intra-Russia Connectivity;
Education and Outreach
Aug. 8 2004: P.K. Young,
Korean IST Advisor to
President Announces
 Korea Joining GLORIAD
 TEIN gradually to 10G,
connected to GLORIAD
 Asia Pacific Info. InfraStructure (1G) will be
backup net to GLORIAD
GLORIAD and HENP Example:
Network Needs of IHEP Beijing
 ICFA SCIC Report: Appendix 18, on Network Needs
for HEP in China (See http://cern.ch/icfa-scic)
 “IHEP is working with the Computer Network Information Center
(CNIC) and other universities and institutes to build Grid
applications for the experiments. The computing resources and
storage management systems are being built or upgraded in the
Institute. IHEP has a 100 Mbps link to CNIC, so it is quite easy to
connect to GLORIAD and the link could be upgraded as needed.”
 Prospective Network Needs for IHEP Bejing
Experiment | Year 2004-2005 | Year 2006 and on
LHC/LCG | 622 Mbps | 2.5 Gbps
BES | 100 Mbps | 155 Mbps
YICRO | 100 Mbps | 100 Mbps
AMS | 100 Mbps | 100 Mbps
Others | 100 Mbps | 100 Mbps
Total (Sharing) | 1 Gbps | 2.5 Gbps
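A quick illustrative check of the "Total (Sharing)" row: the per-experiment needs sum to somewhat more than the shared total, which is the point of statistically sharing one link:

```python
# Sketch: compare the sum of per-experiment needs with the shared total
# provisioned on the IHEP link, per the table above.
needs_mbps = {
    "2004-2005":   {"LHC/LCG": 622,  "BES": 100, "YICRO": 100, "AMS": 100, "Others": 100},
    "2006 and on": {"LHC/LCG": 2500, "BES": 155, "YICRO": 100, "AMS": 100, "Others": 100},
}
shared_total_mbps = {"2004-2005": 1000, "2006 and on": 2500}

for period, needs in needs_mbps.items():
    print(f"{period}: sum = {sum(needs.values())} Mbps, shared total = {shared_total_mbps[period]} Mbps")
```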
Role of Sciences in Information
Society. Palexpo, Geneva 12/2003
 Demos at the CERN/Caltech RSIS Online Stand
 Advanced network and Grid-enabled analysis
 Monitoring very large scale Grid farms with MonALISA
 World Scale multisite multi-protocol videoconference with VRVS (Europe-US-Asia-South America)
 Distance diagnosis and surgery using Robots with "haptic" feedback (Geneva-Canada)
 Music Grid: live performances with bands at St. John's, Canada and the Music Conservatory of Geneva on stage
VRVS 37k hosts
106 Countries
2-3X Growth/Year
Achieving throughput
 User can’t achieve throughput available (Wizard gap)
 TCP Stack, End-System and/or Local, Regional,
Nat’l Network Issues
 Big step just to know what is achievable
(e.g. 7.5 Gbps over 16 kkm Caltech-CERN)
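One concrete piece of the "Wizard gap" on a path like Caltech-CERN is TCP window sizing: the bandwidth-delay product of a 7.5 Gbps, ~16,000 km path dwarfs default buffer sizes. A back-of-the-envelope sketch (assuming roughly 2/3 of c propagation speed in fiber):

```python
# Back-of-the-envelope sketch: bandwidth-delay product of a long fat pipe,
# assuming ~2e8 m/s propagation in fiber and a ~16,000 km one-way path.
path_m = 16_000e3
rate_bps = 7.5e9
rtt_s = 2 * path_m / 2e8                 # round-trip time, ~0.16 s
bdp_bytes = rate_bps * rtt_s / 8         # TCP window needed to fill the pipe
print(f"RTT ~ {rtt_s * 1e3:.0f} ms, window needed ~ {bdp_bytes / 1e6:.0f} MB")
```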