The extension of optical networks into the campus
Wade Hong
Office of the Dean of Science
Carleton University
Outline
Motivation
CA*net 4 IGT
From the Carleton U Perspective
Issues
Lessons learned
Motivation
Large scale distributed scientific experiments (LHC - ATLAS,
SNOLab, Polaris, NEES Grid ... )
Access to regional distributed HPC resources (HPCVL,
SharcNet, WestGrid, TRIUMF Tier 1.5, ...)
Federating growing research-based computing resources on
campus
Allowing the end users to access these resources in an
unencumbered way
CA*net 4 customer-empowered networking in the last mile
CA*net 4 IGT
CANARIE-funded directed research project
build a testbed to experiment with customer-empowered networking, pt2pt optical networks, network performance, long haul 10 GbE, UCLP, last mile issues, etc.
participants from the HEP community from across Canada,
the provincial ORANs, CERN, StarLight, SURFnet, and
potentially others
set up end-to-end GbE and 10 GbE lightpaths between
institutions in Canada and CERN
CA*net 4 Network
CA*net 4 IGT Sites
CA*net 4 IGT
interoperability testing with 10 GbE WAN PHY and OC-192
used IXIA traffic generators to characterize the trans-atlantic
link
transferred real experimental data from ATLAS FCAL beam tests
(GbE and 10 GbE)
demonstrated native end-to-end 10 GbE between CERN and
Ottawa for the ITU Telecom World 2003
Planned CA*net 4 IGT Activities
complete the last mile connectivity for most of the participating
Canadian sites
third OC-192 across Canada being brought up using Nortel
OME 6500s
continuing long haul native 10 GbE experiments (Foundry
MG8s)
TRIUMF to CERN, TRIUMF to Carleton, Carleton to CERN
CERN to Tokyo via Canada
HEPiX Robust Transfer Challenge - sustained disk-to-disk transfers between TRIUMF and CERN
Planned CA*net 4 IGT Activities
Real-time remote farms for ATLAS
CERN to U of Alberta
Data transfer of End Cap Calorimeter data from the
combined beam tests to several Canadian sites
one beam test just completed (~1TB)
second test to start late August (significantly more data)
Transfer of CDF MC data from the Big Mac Cluster
establish a GbE lightpath between UofT and FermiLab
Planned CA*net 4 IGT Activities
Experimentation with bulk data transfer
investigating RDMA/IP (sourcing NICs)
establish GbE lightpaths between Canadian sites
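For scale, a back-of-the-envelope sketch of what such bulk transfers imply: moving a ~1 TB dataset (comparable to the beam-test data above) takes a few hours at GbE rates and well under an hour at 10 GbE. The 70% end-to-end efficiency used here is an illustrative assumption, not a measurement.

```python
# Rough transfer-time estimate for a ~1 TB dataset over GbE and 10 GbE
# lightpaths; the 70% end-to-end efficiency is an illustrative assumption.

def transfer_hours(size_tb, link_gbps, efficiency=0.7):
    """Hours needed to move size_tb terabytes over a link_gbps link."""
    bits = size_tb * 1e12 * 8                      # dataset size in bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600.0

for gbps in (1, 10):
    print(f"{gbps:>2} GbE: ~{transfer_hours(1.0, gbps):.1f} h per TB")
```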
Carleton University
located in Ottawa, the nation’s capital
at the southern end of the world’s longest outdoor
skating rink
Canada’s Capital University
student population of 22,000, with 1,700 faculty and staff
over $100M in research funding in the past year
CFI contribution significant
about half to Physics
Bill St. Arnaud’s alma mater
Carleton University
External Network Connectivity
commodity Internet
Telecom Ottawa - was the largest metro 10 GbE deployment
R&E traffic
finally connected to ORION (Dec 2003), the new ORAN, just
prior to the decommissioning of ONET
EduNet
non-profit, OCRI-managed dial-up and high-speed Internet for higher education institutions in Ottawa
dial-up ISP has a dedicated link back to campus
Carleton U Network Upgrade
the campus has been planning a network upgrade for the past 3 to 4 years
several false starts
application to funding agencies based on requirements of
research activities
may have missed the window of opportunity
finally proceeding with the network upgrade
RFPs currently being evaluated
Network Upgrade Proposal
original proposal
phase one (Year 1)
build the campus core network
phase two (Year 2)
build the distribution layer
phase three (Year 3)
rewire the buildings for access
not my preferred ordering!
Proposed Topology
Differing Viewpoints
debate over how to handle high capacity research traffic
flows
necessity of routing traffic through the proposed high
capacity campus core
on the other hand, optical bypasses would reduce the complexity and cost of the campus network
4 fibre pairs between Herzberg Laboratories and
Robertson Hall cost about $4K CDN - we prevailed
reality check
current campus network cannot handle the high volume
and high speed flows
Motivations Revisited
Large scale distributed scientific experiments
Motivations Revisited
Access to regional distributed HPC resources
other HPCVL sites (Queen's, UofO, RMC, Ryerson U)
TRIUMF ATLAS Canada computing centre
SNOLab
shared ORION and CA*net 4 connectivity is only at GbE
high capacity flows probably dictate pt2pt optical bypass
interconnectivity can be static or dynamic
fully statically meshed or scheduled dynamic connectivity on
demand - probably the latter
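One reason scheduled dynamic connectivity looks preferable to a full static mesh: the number of pt2pt lightpaths grows quadratically with the number of sites. A small sketch (site counts are illustrative):

```python
# Lightpaths needed for a full static mesh of N sites; with scheduled
# dynamic setup, only the connections actually in use are provisioned.

def full_mesh_links(n_sites):
    """Number of pt2pt links in a full mesh of n_sites."""
    return n_sites * (n_sites - 1) // 2

for n in (4, 8, 12):
    print(f"{n:2d} sites -> {full_mesh_links(n)} static lightpaths")
```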
Motivations Revisited
Federating growing research-based computing resources into a
campus grid
HPCVL Linux cluster upgrade (128+256 CPUs)
Physics research cluster upgrade (40+96+96 CPUs)
Civil Engineering (~128 CPUs)
Architecture/Psychology visualization cluster (>128 CPUs)
Systems and Computer Engineering (64 CPUs)
debating a condominium or distributed model
most likely a hybrid with optical fibre as the interconnecting fabric
probably static pt2pt optical bypass for ease of use and user
control
Motivations Revisited
federated the Physics research computing cluster with part of the
HPCVL Linux cluster last summer for about 2 months
clusters located on different floors
pt2pt link established - much easier than routing through the
campus network
completed half of the MC regeneration for the third SNO
paper
similar arrangement this summer to add part of the HPCVL
cluster to the Carleton U Physics contribution to the LHC
Computing Grid till the end of the year
Issues
control
central management and control vs end user empowerment
disruptive
network complexity
using pt2pt ethernet links for high capacity flows should
simplify campus networks (reduce costs?)
security
disruptive - bypassing DMZ
for the uses considered here, the pt2pt links are inherently secure - non-routed private subnets
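As an illustration of the non-routed private subnet point, a minimal addressing sketch for a single pt2pt link; the prefix is hypothetical, not the actual plan:

```python
# Hypothetical address plan for one pt2pt lightpath: a /30 from RFC 1918
# space gives exactly two usable host addresses, one per endpoint, and is
# never advertised into the routed campus network or the Internet.
import ipaddress

link = ipaddress.ip_network("10.10.0.0/30")   # prefix chosen for illustration
end_a, end_b = list(link.hosts())
print(f"campus end: {end_a}/{link.prefixlen}")
print(f"remote end: {end_b}/{link.prefixlen}")
print(f"RFC 1918 private space: {link.is_private}")
```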
Issues
why not copper?
it could be, but fibre offers
greater distances
fewer active devices along the path
management and control - a device at each end under the control of the end users is ideal
consistent device characteristics - jumbo frames, port speed, duplex, etc. (see the sketch below)
inter-building connectivity is fibre and planned
vertical cabling will be fibre
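On the consistent-device-characteristics point, a quick sketch of why end-to-end jumbo frame support is worth insisting on; the framing constants are standard Ethernet values, and the comparison is an illustration rather than a testbed measurement:

```python
# Ethernet payload efficiency at standard (1500 byte) and jumbo (9000 byte)
# MTU.  Per-frame overhead on the wire: 14 B header + 4 B FCS +
# 8 B preamble/SFD + 12 B inter-frame gap = 38 B per payload.
OVERHEAD_BYTES = 14 + 4 + 8 + 12

for mtu in (1500, 9000):
    efficiency = mtu / (mtu + OVERHEAD_BYTES)
    print(f"MTU {mtu:>4}: {efficiency * 100:.1f}% of line rate is payload")
```

The larger practical benefit is the roughly sixfold reduction in frames (and per-packet processing) for the same data volume, which is why a single non-jumbo-capable device in the path undermines the whole link.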
Issues
last mile connectivity
demarcation point
end user device (NIC) or an edge device (switch, CWDM
mux)
location of the demarc
at the end user or a common shared location
technology used to extend the end-to-end lightpath into the
campus
pt2pt GbE
optical GbE NIC - patched through to GbE interface on ONS
media converter - copper to optical
Issues
pt2pt 10GbE
LAN PHY to WAN PHY conversion to OC192c on ONS
15454/OME 6500
wavelength conversion
CWDM
media converters - copper to colored wavelength
colored GBICs for GbE switch
optical link characteristics
padding (attenuation), proper power budget, etc. (see the power-budget sketch below)
end user shouldn’t need to be an optical networking expert
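To make the padding and power budget points concrete, a minimal sketch assuming generic figures for a long-reach (ZX-class) optic on a short campus run; none of these numbers come from the testbed:

```python
# Illustrative optical power budget for a short pt2pt GbE lightpath.
# All figures are assumed, generic values, not measured testbed data.
tx_power_dbm       = 2.0     # transmitter launch power
rx_sensitivity_dbm = -24.0   # minimum receive power
rx_overload_dbm    = -3.0    # maximum receive power before saturation
fibre_loss_db_km   = 0.25    # single-mode loss around 1550 nm
connector_loss_db  = 0.5     # per mated connector pair
link_km, connectors = 2.0, 4

path_loss = fibre_loss_db_km * link_km + connector_loss_db * connectors
rx_power = tx_power_dbm - path_loss
print(f"received power: {rx_power:.1f} dBm")

if rx_power > rx_overload_dbm:
    # short campus runs often need a fixed attenuator ("padding")
    print(f"pad with at least {rx_power - rx_overload_dbm:.1f} dB of attenuation")
elif rx_power < rx_sensitivity_dbm:
    print("insufficient power budget for this link")
else:
    print(f"margin above sensitivity: {rx_power - rx_sensitivity_dbm:.1f} dB")
```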
Lessons Learned
good to be rich in fibre
provides greater flexibility
support of ORANs, national R&E network, and international
partners is essential - all have been very supportive
need to convince local campus networking folks that this is not
really too disruptive
will simplify and not burden the campus production network
need a more coherent way of dealing with optical access in the
last mile
still lots to learn!
Thank You!
Wade Hong
[email protected]