"The OptIPuter: an IP Over Lambda Testbed" Invited Talk NREN Workshop VII: Optical Network Testbeds (ONT) NASA Ames Research Center Mountain View, CA August 9, 2004 Dr. Larry Smarr Director, California Institute for Telecommunications and Information Technologies Harry E. Gruber Professor, Dept. of Computer Science and Engineering Jacobs School of Engineering, UCSD The OptIPuter Project – Removing Bandwidth as an Obstacle In Data Intensive Sciences • NSF Large Information Technology Research Proposal – Cal-(IT)2 and UIC Lead Campuses—Larry Smarr PI – USC, SDSU, NW, Texas A&M, Univ. Amsterdam Partnering Campuses • Industrial Partners – IBM, Sun, Telcordia, Chiaro Networks, Calient, Glimmerglass, Lucent • $13.5 Million Over Five Years • Optical IP Streams From Lab Clusters to Large Data Objects NIH Biomedical Informatics Research Network NSF EarthScope and ORION http://ncmir.ucsd.edu/gallery.html siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml What is the OptIPuter? • Applications Drivers Interactive Large Data Objects • OptIPuter Nodes Scalable PC Cluster LambdaGrid “Browser” • IP over Lambda Connectivity Predictable Backplane • Open Source LambdaGrid Middleware Network is Reservable • Data Retrieval and Mining Global Virtual Disk Drives • High Defn. 
Visualization, Collaboration “Ultra-Reality TV” www.optiputer.net See Nov 2003 Communications of the ACM for Articles on OptIPuter Technologies UCSD Packet Test Bed OptIPuter Year 2 OptIPuter UCSD / San Diego Network and Nodes -- 2004 SDSC HP 28-node cluster (shared) CSE 8-node cluster (shared) Infiniband 64 nodes Infiniband 4 nodes Sun Sun 24-32-node 128-node compute compute cluster cluster IBM 48-node storage cluster JSOE Preuss Sun 24-32-node compute cluster HP 4-node control Dell Viz 10 Dell 5224 Chiaro Extreme 400 Enstara 4 2 1 Extreme 400 9-node cluster 7-node cluster (shared) (shared) To UCI and ISI via CalREN-HPR Juniper T320 10 Extreme 400 Dell 5224 CRCA 1 1 Bonded GigE 1 Geowall 2 Tiled Display IBM 9 mpixel display pairs 1 3-node Viz cluster Cisco 6509 4 Dell Geowall 4 Dell 5224 Dell 5224 SIO Dell 5224 IBM 128-node CPU cluster IBM 9-node Viz cluster Sun Sun 24-32-node 5-node cluster cluster IBM 9 mpixel display pairs SOM Sun 22-node Viz cluster 6th College Dell 5224 UCSD Quartzite Core at Completion (Year 5 of OptIPuter) Quartzite Communications Core Year 3 To 10GigE cluster node interfaces ..... Quartzite Core Wavelength Selective Switch • Recommended for Funding • Physical HW to Enable Optiputer and Other Campus Networking Research • Hybrid Network Instrument To 10GigE cluster node interfaces and other switches To cluster nodes ..... To cluster nodes ..... GigE Switch with Dual 10GigE Upliks 32 10GigE Production OOO Switch To cluster nodes ..... To other nodes GigE Switch with Dual 10GigE Upliks Currently Under Review by NSF ... 
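The deck's premise of "removing bandwidth as an obstacle" rests on dedicated end-to-end lambdas: at 10 Gb/s over continental distances, the bandwidth-delay product is far larger than default TCP windows, which is why the project pursued reservable lightpaths and new transport protocols. A minimal sketch of that arithmetic (the 10 Gb/s lambda and 60 ms round-trip time are illustrative assumptions, not figures from the talk):

```python
# Back-of-envelope bandwidth-delay product (BDP) for a dedicated lambda.
# Assumed figures: one 10 GigE wavelength, ~60 ms coast-to-coast RTT.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

lambda_bps = 10e9   # 10 Gb/s lambda (assumed)
rtt = 0.060         # round-trip time in seconds (assumed)

print(f"BDP: {bdp_bytes(lambda_bps, rtt) / 1e6:.0f} MB in flight")
# -> BDP: 75 MB in flight
```

A stock TCP stack of the era buffered far less than this per connection, so a single flow could not fill the lambda without tuning or alternative transports.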
Campus Research Cloud Edge and Core
[Diagram: GigE switches with dual 10GigE uplinks at the edge; the Chiaro Enstara core (GigE and 10GigE ports, 4-pair fiber) connects through a Juniper T320 to the CalREN-HPR Research Cloud.]

OptIPuter Nodes: Using Parallel Lambdas to Prototype a Bandwidth-Rich World
• UIC/EVL: 16-dual-Xeon cluster, 16×1 GE; 128×128 Calient and 64×64 Glimmerglass switches
• Nortel, SBC Partnership With UIC & NU: OMNInet 10GEs; 16×1 GE @ NU
• I-WIRE OC-192; Int'l and Nat'l GE, 10GE
• Original Plan – More Now in Place (16×10 GE?)
• 10GE OptIPuter CAVEWAVE on the National LambdaRail
• EVL Next Step: Coupling NASA Centers to NSF OptIPuter
Source: Tom DeFanti, OptIPuter co-PI

OptIPuter Software Architecture: from Grid to LambdaGrid
• OptIPuter Applications
• DVC/Middleware: Visualization (DVC #1), Higher-Level Grid Services (DVC #2), Security Models (DVC #3); Data Services: Real-Time Objects
• Layer 5: SABUL, RBUDP, DWTP, GTP – High-Speed, Fast Transport
• Grid and Web Middleware – (Globus/OGSA/Web Services/J2EE)
• Layer 4: XCP
• Optical Signaling/Management: λ-configuration, Net Management
• Node Operating Systems; Physical Resources
DVC = Distributed Virtual Computer
Source: Andrew Chien, UCSD, OptIPuter Software Systems Architect

Ultra-Resolution OptIPuter Displays Utilizing Photonic Multicasting – Scaling to 100 Million Pixels (UIC-EVL)
• Glimmerglass Switch Used to Multicast and Direct TeraVision Stream from One Tile to Another on the GeoWall-2
• Driven by Linux Graphics Clusters
• 30-Megapixel High-Resolution Visualizations = ~1 Gigapixel/s at 30 fps = ~30 Gb/s Bandwidth
Source: Jason Leigh, OptIPuter co-PI

The OptIPuter Will Be Used to Enhance Collaboration
• OptIPuter Will Connect Falko Kuester's Cal-(IT)2@UCI Smart Classroom and the 50-Mpixel Display at UCSD in Ellisman's BIRN Laboratories

OptIPuter Organization Chart
• Larry Smarr, Principal Investigator; Frontier Advisory Board (FAB); Maxine Brown, Project Manager
• Network and Hardware Infrastructure – Tom DeFanti and Phil Papadopoulos, co-PIs: Campus Testbeds; SoCal Testbed (network, clusters, storage); Chicago/Int'l Testbed (network, visualization clusters and displays); National Testbed
• Software Architecture – Andrew Chien: Middleware/DVC, Real-Time OS, Security, Application Performance Modeling, Transport Protocols, Optical Signaling Protocols
• Data, Visualization and Collaboration – Jason Leigh: Data Storage, Visualization/Collaboration, Volume Visualization, High-Performance Visualization and Data Analysis, Photonic Multicast, LambdaRAM
• Applications and Outreach – Mark Ellisman, co-PI; John Orcutt; Debi Kilb/Tom Moher: Bioscience (NCMIR/BIRN), Geoscience (EarthScope, Oceanographic), Education & Outreach

Future OptIPuter Driver: Gigabit Fibers on the Ocean Floor
• ORION – Ocean Research Interactive Ocean Network
• www.neptune.washington.edu
• Currently Under Review by NSF; Cyberinfrastructure in Design Phase – Fiber Optic Cables, Satellite, Wireless
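The back-of-envelope figure on the ultra-resolution display slide above (~30 Mpixels at 30 fps = ~1 Gigapixel/s = ~30 Gb/s) can be checked with quick arithmetic. A sketch, assuming 32 bits/pixel of uncompressed RGBA (the pixel depth is my assumption; the slide does not state it):

```python
# Verify the tiled-display bandwidth estimate from the talk.
MPIXELS = 30e6        # pixels on the display (from the slide)
FPS = 30              # frames per second (from the slide)
BITS_PER_PIXEL = 32   # assumed uncompressed RGBA

pixel_rate = MPIXELS * FPS               # pixels per second
bandwidth = pixel_rate * BITS_PER_PIXEL  # bits per second

print(f"{pixel_rate / 1e9:.1f} Gpixel/s -> {bandwidth / 1e9:.1f} Gb/s")
# -> 0.9 Gpixel/s -> 28.8 Gb/s
```

That works out to roughly 1 Gigapixel/s and just under 30 Gb/s, consistent with the slide's rounded figures.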