An Introduction to the Access Grid 2003/10/23 張權力 SRO-NCHC [email protected]
Plan of Presentation Basic ideas of Access Grid (30 mins) IP Multicast and Access Grid (30 mins) Varieties of Access Grid (10 mins) NCHC and Access Grid (10 mins) Possible Demo
3 Basic ideas of Access Grid Concepts of AG AG 1.x - basic functionality - hardware/software overview - deployment example of AG 1.x - PIG AG 2.x - AG 2.x overview - AG 2.x security Differences of AG 1.x and 2.x
4 IP Multicast and AG A brief review of IP Multicast - what is multicast? Overview of IP multicast protocols IP multicast addressing ASM and SSM AG networking issues - importance of the Beacon - how to maintain a relationship with network people - what to do when the network fails - other networking issues - IPv6 - QoS - SSM
5 Varieties of AG Open Mash VRVS (Virtual Rooms Videoconferencing System) Berkeley Internet Broadcasting System (BIBS) ACE (Advanced Collaborative Environments) Access Grid Augmented Virtual Environments (AGAVE), Tele-Immersion - CAVE, IDesks, Tiled Displays, autostereoscopic displays Community Grid Computing Labs, Indiana University, Web Services and Peer-to-Peer Technologies for the Grid AG-ANL…… inSORS – commercial AG
6 NCHC and AG Summary of NCHC's AG activities Deployment status of AG in Taiwan Other activities
7
8 – A new style of video conferencing system created by Argonne National Laboratory (ANL) in the United States; it supports joint meetings of 3 to 20 people. Number of AG sites deployed: – 10 Sites in FY99 – 34 Sites in FY00 – 69 Sites in FY01 – 136 Sites in FY02 – Over 150 Sites in FY03
9 Concepts of AG
10 Access Grid Concepts (I) Shared PowerPoint Large-format displays Multiple audio and video streams Supporting distributed meetings
11 Access Grid Concepts (II) (room photos; labels: presenter mic, presenter camera, ambient mic (tabletop), audience camera) Spaces at ANL: Library, Workshop, ActiveMural Room, DSL
12 Access Grid Concepts (III) (room diagram; labels: feedback view surface, ambient mic (ceiling mount), presenter mic, audience mic, presenter camera, audience camera, technician headset)
13 Some Definitions of AG 1.X The Access Grid: The infrastructure and software technologies enabling linking together distributed Active(Work)Spaces to support highly distributed collaborations in science, engineering and education, integrated with and providing seamless access to the resources of the National Technology Grid. Access Grid Node: The ensemble of systems and services managed and scheduled as a coherent unit (i.e. the basic component of a virtual venue). Access Grid Site: A physical site (admin domain, networking POP, etc.) that supports one or more Access Grid Nodes. Access Grid Sites need to be Grid-services enabled (authentication, QoS, security, resource management, etc.)
14 Some Definitions of AG 1.X (cont'd) The Access Grid is an Internet-based model for video conferencing. AG 1.x was developed by the Futures Lab (FL) within the Mathematics and Computer Science (MCS) division of Argonne National Laboratory (ANL).
15 Some Definitions of AG 2.X The Access Grid project's focus is to enable groups of people to interact with Grid resources and to use Grid technology to support group-to-group collaboration at a distance.
Supporting distributed research collaborations Distributed lectures and seminars Remote participation in design and development Virtual site visits and team meetings Complex distributed grid-based applications Long-term collaborative workflows
16 Stages of Collaborative Work Awareness – Interaction – Cooperation – Collaboration – Virtual Organization (increasing need for persistent collaborative infrastructure) Can the concept of Persistent Shared Spaces enable the cost-effective support of virtual organizations?
17 Physical Spaces to Support Group Work Overall room layout large enough to support groups and workplace tools configured so that both local and remote interactions work Lighting and camera geometry studio-type environment with specified placement, levels well tested and calibrated for good image quality Audio geometry multiple microphones and speakers tested to provide good coverage designed to support audio clarity and some spatialization
18 Virtual Collaboration Spaces Structure and organization supports intended use activity dependent • secure channels for "private sessions" • broadcast channels for public meetings Supports multiple interaction types (modalities) text, audio, video, graphics, animation, VR Can exploit strong spatial metaphor interaction scoping resource organization navigation and discovery Needed to escape the tyranny of the desktop
19 AG 1.X
20 A Semi-Designed Space
21 Enhanced Space for Distributed Presentations
22 AG 1.x system components
23 AG 1.x Basic functionality An Access Grid "node" is a conference room or small auditorium provisioned with the equipment to participate in a multipoint video conference: • Audio • Video • Whiteboard • Screen sharing • Application
AG 1.x Functionality The basic functionality provided within the node is: Audio: encoding using one or more microphones (via a mixer) Video: encoding or "capture" using one or more cameras Audio presentation using one or more speakers Video display via one or more computer monitors and/or video projection techniques Screen sharing/whiteboard via VNC
AG 1.x Hardware Summary 4 PCs: a display PC (Windows 2000, with two dual-head display cards), a video capture PC (Linux, with 4 capture cards), an audio capture PC (Linux, with 4 sound cards), and a control PC (Windows 98 or later); 4 cameras; multiple microphones; an echo canceller; 3 projectors
26 AG 1.x Software components • The Access Grid model revolves around two pieces of software: • vic: the video conferencing tool, and • rat: the robust audio tool. • and involves several other applications • Distributed PowerPoint • a MUD • the Multicast Beacon • Virtual Venue • Virtual Network Computing
27 AG 1.x Software components (cont'd) Why? We introduce AG 1.x by learning its vocabulary
28 AG 1.x Software components – history class vic and rat were developed as part of the Internet Multicast backbone, or MBONE, which provided multicast services over the unicast Internet backbone (using "tunnels", or "bridges", between multicast nexus sites). The Access Grid model relies upon the ability to send and receive Internet Multicast traffic to and from all conference nodes. An individual vic stream will generate from 10 Kbps to 4 Mbps of network traffic. A large conference may generate 20 Mbps.
29 AG 1.x – Video Conference tool (vic) vic was developed by Steve McCanne and Van Jacobson at the Lawrence Berkeley Labs. It is intended to link multiple sites with multiple simultaneous video streams over a multicast infrastructure.
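To make the vic/rat pairing concrete, here is a minimal, purely illustrative sketch (modern Python, not part of AG 1.x itself) of how a venue-style launcher might start both tools for one session. The multicast addresses, ports and TTL are arbitrary examples, and the command lines use the conventional MBone-tool addr/port form; exact flags vary between vic and rat builds.

```python
import subprocess

# Hypothetical media configuration for one "conference" (virtual venue):
# one multicast group/port pair for video (vic) and one for audio (rat).
session = {
    "video": ("224.2.177.155", 55524),   # example addresses only
    "audio": ("224.2.177.156", 16964),
    "ttl": 127,
}

def launch_media_tools(session):
    """Start vic and rat for the selected venue (MBone-style addr/port arguments)."""
    v_addr, v_port = session["video"]
    a_addr, a_port = session["audio"]
    return [
        subprocess.Popen(["vic", "-t", str(session["ttl"]), f"{v_addr}/{v_port}"]),
        subprocess.Popen(["rat", "-t", str(session["ttl"]), f"{a_addr}/{a_port}"]),
    ]

# procs = launch_media_tools(session)   # requires vic and rat on the PATH
```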
30 AG 1.x – Robust Audio Tool (rat) rat is a more recent descendant of the Visual Audio Tool (vat), which was developed by Steve McCanne and Van Jacobson at the Lawrence Berkeley Labs. rat allows multiple users to engage in an audio conference over the Internet in multicast mode.
31 AG 1.x – The MUD software Operators at each site involved in an Access Grid conference typically keep in touch by using software originally developed for online "role-playing" games, generically called "Multi-User Dungeon" games, or "MUDs". (MUD functionality is similar to that of Internet Relay Chat operating with access control.) Argonne runs a MUD server for use by Access Grid operators, who run MUD clients on their desktop systems. tkMOO-lite is currently the recommended MUD client for this purpose, but others, such as Tiny-Fugue in the Unix environment, can be used as well.
32 The Multicast Beacon To help diagnose multicast network problems during conferences, Argonne promotes the use of the NLANR multicast "Beacon" monitoring system, which includes three pieces of software: 1. a Beacon to be run at each AG node, 2. a server to collect transmission statistics from a collection of Beacons, and 3. a Beacon web server that displays data collected by the server.
33 AG 1.x Virtual Venue software – server Coordinating multiple group conferences can be complicated. Argonne has developed a collection of web pages and Java applications that can simplify the process. The Virtual Venue is basically a web page that lets users select a "conference" to attend. In this context a "conference" is composed of • a vic multicast address, • a rat multicast address, and • a MUD identifier.
34 AG 1.x Virtual Venue software – client If your systems are Virtual Venue-enabled, the display system operator can click on a conference room name and the vic, rat and MUD applications running on the video display, video capture and audio processing systems will all be started with target addresses and settings appropriate to the selected conference room. This coordination is accomplished by running an "event server" and the event controller on the display system, along with "event listeners" on the video capture and audio processing systems.
35 AG 1.x – VNC vs. DPPT • VNC allows users to share monitor screens over the Internet in a variety of modes. In the Access Grid environment, VNC allows a speaker to share his/her podium laptop with Access Grid display systems, which can then project it at remote nodes. This is useful when a speaker wishes to give real-time demonstrations or present PowerPoint slides that include "fancy" features, such as animations, that cannot be displayed using Distributed PowerPoint. • VNC employs a client-server architecture, and there are clients and servers available for Windows 98/NT/2000 and Unix operating systems.
36 References www.cc.ku.edu/acs/docs/access-gridnode/access-grid-ku.html http://www.osc.edu/accessgriddoc/room/roomconfig.html http://mrccs.man.ac.uk/global_supercomputing/hardware.html
37 Argonne AG Web Pages http://www-fp.mcs.anl.gov/fl/Accessgrid/
38 AG 1.x deployment example
39 AG 1.x cost components Hardware requirements: 1. four PCs (NT$200,000–250,000); 2. audio/visual peripherals (NT$450,000–550,000); 3. three DLP or LCD projectors (NT$150,000–200,000); estimated total cost about NT$800,000–1,000,000. Software requirements: everything can be downloaded from the network, completely free. Hardware/software installation takes about two weeks. Network requirements: high bandwidth, at least 10 Mbps, with multicast capability.
40 AG 1.x – PIG PIG = Personal Interface to the Access Grid, ANL's last release of AG 1.x (2000) Why PIG?
An AG 1.x node is too expensive and too complex to set up. A PIG runs on a desktop or laptop, is easy to install, is more portable than a 4-PC setup, and has less stringent hardware/software requirements. Why this late? Maybe ANL wanted to differentiate the AG from other video conferencing technologies. They like the AG's multiple displays… just my two cents
41 AG 2.x
42 Ad Hoc collaboration using AG
43 The Grid Software Stack with AG
44 Interactive Scientific Computing using AG
45 Hopefully, everybody will use the AG at work! Passive stereo display Access Grid Digital whiteboard Tiled display
46 Some AG Active Research Issues Scalable wide area communication Evolution of multicast-related techniques, and time-shifting issues Scoping of resources and persistence Value of spatial metaphors, security models Virtual Venues, synchronous and asynchronous models Improving sense of presence and point of view Wide Field Video, Tiled Video, High-resolution video codecs Network monitoring and bandwidth management Beacons and network flow engine Role of back-channel communications Text channels and private audio Recording and playback of multi-stream media
47 AG 2.x Architecture (architecture diagram: a Venue Server hosting a Lobby and Venues 1, 3, 5, 7 and 8; Venue Server Management and Node Management clients; Network Services; a Venue Client; and an Access Grid Node whose Node Service talks to several Service Managers, each running Services)
48 AG vs. Commercial Desktop Tools AG targets beyond the desktop: large-format, multi-screen displays for AG global channels; room-scale, hands-free, full-duplex audio AG uses dedicated hardware: multiple machines, separation of function; NT, Linux AG software is Open Source: extends and builds on community tools AG environment is integrated with Grid services: extensible framework designed to "plug into" the Grid AG development is a Community Effort: you are welcome to join in the fun!!
49 AG 2.x - What is the Access Grid? Virtual Venues: places where users collaborate Network Services: advanced middleware Virtual Venues Client: user software Nodes: Shared Nodes • an administratively scoped set of resources; Personal Nodes • a user-scoped set of resources Resources provide capabilities Users collaborate by sharing: • Data • Applications • Resources
50 AG 2.x Virtual Venues What is a Virtual Venue? A Virtual Venue is a virtual space for people to collaborate What do Virtual Venues provide? Entry/Exit Authorization Information Connections to other Venues Coherence among Users • Venue Environment, Users, Data Client Capabilities Negotiation • List of Available Network Services • Keep track of resulting Stream Configurations Applications Virtual Venues have two interfaces: Administrative – Venue Management Software Client – Virtual Venue Client Software
51 AG 2.x Venue Server The Venue Server houses multiple Virtual Venues The creation, modification and destruction of venues is done through the venue management tool.
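As a way to picture what a Venue keeps track of, here is a small hypothetical data model in modern Python. It is not the AG 2.x Toolkit's actual API — the class and field names are invented for illustration — but it mirrors the description above: per-venue stream configurations, connections to other venues, shared data and venue-hosted applications.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StreamDescription:
    media: str                    # "video" or "audio"
    address: str                  # multicast group, e.g. "233.2.171.1" (example)
    port: int
    encryption_key: str = ""      # empty if the venue is unencrypted

@dataclass
class Venue:
    name: str
    description: str
    streams: List[StreamDescription] = field(default_factory=list)
    connections: List[str] = field(default_factory=list)   # URLs of adjacent venues
    data: Dict[str, bytes] = field(default_factory=dict)    # shared documents
    applications: List[str] = field(default_factory=list)   # venue-hosted app sessions

lobby = Venue(
    name="Lobby",
    description="Default entry point on the venue server",
    streams=[StreamDescription("video", "233.2.171.1", 55524),
             StreamDescription("audio", "233.2.171.2", 16964)],
    connections=["https://venues.example.org/Venues/Workshop"],  # hypothetical URL
)
print(lobby.name, [s.media for s in lobby.streams])
```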
52 AG 2.x Venue Client Displays venue content Enables navigation Integrates with Node Service
53 AG 2.x Venue Client On startup, the software may prompt for required information
54 AG 2.x Shared Nodes Node Service An AG Node consists of one or more machines The single central NodeService communicates with a ServiceManager on each machine (diagram: Node Service connected to several Service Managers)
55 AG 2.x Nodes cont'd Node Services Expose resources on the machines in the node Implement a specific network interface Provide capabilities to the node • Video, H.261, 25 fps • Audio, 16 kHz (diagram: each Node Service drives Service Managers hosting an Audio Service, a Video Consumer Service and a Video Producer Service)
56 Nodes cont'd Users install available services to establish the capabilities of the Node Adding services extends the collaborative capabilities of the Node Services are simple to develop and integrate, facilitating third-party development (diagram: Node Service → Service Managers → Audio Service, Video Consumer Service, Video Producer Service)
57 AG 2.x Platform and Software Requirements Supported Platforms: Windows XP Red Hat Linux 7.3 Required Software Python 2.2 – http://www.python.org/ • Windows – ActiveState Python works well http://www.activestate.com/ (and includes the win32 extensions) • Linux – included with Red Hat 7.3 wxPython 2.4 – http://www.wxpython.org/ • Windows – from the project web site • Linux – included in Red Hat 7.3
58 AG 2.x Toolkit Overview The Toolkit contains: The Venue Server (and a Debug Version) The Venue Client (and a Debug Version) The Venue Server Management Client Configuration Tools: • Windows Globus Config Tool • Networking Config Tool • Certificate Request Tool • Node Setup Wizard • Video Setup Tool Manuals for the Clients Developer Documentation Additionally, separate components are available for: Shared Presentation Application (Windows Only) Shared Browser Application (Windows Only) Bridge Service (Linux Only)
59 AG 2.x and Globus Globus became a requirement in AG 2.x In order to run AG 2.x, a certificate needs to be requested It is for each person, not for each node Uniquely identifies a user (O=Access Grid/ OU=agdev-ca.mcs.anl.gov/ OU=apan.net/ CN=ag-training) • O= associates the certificate with an Organization • OU= associates the certificate with an Organizational Unit • OU: agdev-ca.mcs.anl.gov is the signing CA's address • OU: apan.net is your domain • CN= is the unique name of the user
60 AG 2.x security
61 What does security mean? No one can hear our budget discussion I can tell exactly who is hearing our budget discussion I won't get fired if I use AG 2.x I won't get fired, and I can blame ag [email protected] if someone breaks in while I'm using AG 2.x I can put my AG Node behind the department firewall and everything will be cool Everything is encrypted and password protected The script kiddies won't get me
62 AG Threat Model A threat model describes… … attackers' resources … what attacks are expected to be mounted … what attacks we aren't going to worry about Assumptions End systems not compromised • How to ensure this?
(classical system intrusion detection mechanisms) Attacker has complete control over the communications channel • Steal packets • Generate packets • Forge packets
63 Passive attacks Attacker reads packets from the network Packet sniffing from shared LAN (Ethernet, wireless) Goal: obtain private information Credentials Passwords Confidential data Offline cryptographic attacks Multicast: anyone can listen in!
64 Active attacks Spoofing attacks Denial of service Replay attacks Message insertion / deletion / modification Man in the middle Goals: disruption of service Hijacking of data channels For fun and profit
65 Social Engineering Attacks Strong encryption isn't all there is "Hi, I lost my password, can you reset it and tell me what it is?" Security of passwords and private keys
66 What to protect in AG? Media streams Shared documents Audio Video "Presentations" PowerPoint HTML slide shows MPEG movies Visualizations Node hardware System software Control streams Data Nuke simulations Design documents Models Shared applications
67 From whom? Inadvertent lurkers (the abandoned-node problem) Over-interested third parties Everyone but those invited to private meetings Hackers intent on destroying our data Competitors
68 How do we do this? Identify users and nodes Authenticate them with an authority we trust Authorize their access to the resources they request Provide them privacy and secure access to their applications and data
69 Classic components of computer security In the abstract: Identification of users and services Authentication of the identity of these users and services Authorization for access to resources Confidentiality of data (files, streams, control, etc.) (Non-repudiation) More concretely…
70 Identification Each user identified What's in a name? Each server or service identified Similar to the mechanism used by SSL-secured websites (do you check their certificates?) Basis for authorization
71 Authentication The mechanism by which an assertion of identity is verified In the AG, authentication is performed each time a client/server transaction occurs (Provided by the underlying toolkit)
72 Authorization Determination of whether the authenticated identity of the requestor is allowed access to a resource The AG toolkit defines role-based authorization mechanisms Access control to venues Administrative access to venues, venue servers Open to third-party application code as well
73 Confidentiality Privacy of control connections (SSL) Privacy of media streams (media tools + AES/Rijndael) Privacy of venue data, other venue app interactions (SSL)
74 Stream Security Current vic / rat support AES/Rijndael encryption Key distribution via venue services mechanisms Per RFC 1889 Vague worries… Are keys recoverable (in the face of many gigabytes of encrypted data)? Rekeying intervals? IETF Secure RTP draft (draft-ietf-avt-srtp-02)
75 How are these accomplished? Key supporting technology: Public-key Infrastructure
76 Digression into Cryptography Symmetric or shared-key encryption The same key is used for encryption and decryption High-speed ciphers available Key distribution is a problem
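To make the symmetric (shared-key) idea concrete, the sketch below encrypts and decrypts a buffer with AES in counter mode using the third-party cryptography package (pip install cryptography). It only illustrates the shared-key property noted above; it is not how vic and rat actually key or frame their AES/Rijndael-protected RTP packets.

```python
from os import urandom
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = urandom(32)        # 256-bit shared session key (in the AG it would come from the Venue)
nonce = urandom(16)      # per-message counter-mode nonce

payload = b"one packet's worth of audio samples"

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).encryptor()
ciphertext = encryptor.update(payload) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == payload
print("round trip OK; without the key an eavesdropper sees only", ciphertext[:8].hex(), "...")
```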
77 Cryptography, cont. Public-key cryptography Different keys used for encryption and decryption The public key is published The private key is held private Used to implement message authentication, digital signatures Much, much slower than symmetric encryption Typically used in conjunction with X.509 certs for authentication, key exchange
78 SSL Secure Sockets Layer Key protocol in the AG, uses PKI certificates to authenticate clients and servers for connection-oriented communications Public-key crypto is used to bootstrap a shared-key cipher for high-speed data communication AG uses SSL via the Globus Toolkit
79 How are these accomplished? Key supporting technology: Public-key Infrastructure Identity asserted by an X.509 Identity Certificate Contains a subject name and a public key Also contains the name of the cert's issuer Digitally signed by a Certificate Authority (the issuer) The signature asserts that the CA believes the holder of the private key has the identity asserted in the certificate SSL uses a challenge/response mechanism to verify that the presenter of a cert holds the private key
80
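A small sketch of the same pattern with Python's standard ssl module: the client trusts a CA certificate, the TLS handshake proves the server holds the private key matching its certificate, and the verified subject/issuer names become the basis for identification and authorization. The file names and server address are placeholders, and the AG itself does this through the Globus Toolkit rather than raw ssl.

```python
import socket
import ssl

CA_FILE = "ca-cert.pem"                  # the CA we trust (placeholder path)
SERVER = ("venues.example.org", 8000)    # hypothetical venue server address

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile=CA_FILE)   # only certs signed by this CA pass
# For mutual authentication the client would also present its own certificate:
# context.load_cert_chain(certfile="my-cert.pem", keyfile="my-key.pem")

with socket.create_connection(SERVER) as raw:
    with context.wrap_socket(raw, server_hostname=SERVER[0]) as tls:
        cert = tls.getpeercert()
        print("authenticated peer subject:", cert["subject"])
        print("issued by:", cert["issuer"])
        # an authorization layer would now map this identity onto venue roles
```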
81 Concluding this section Concepts of AG AG 1.x - basic functionality - hardware/software overview - deployment example of AG 1.x - PIG AG 2.x - AG 2.x overview - AG 2.x security Differences of AG 1.x and 2.x
82 Access Grid Nodes in Summary Access Grid 2.0 reference platforms: 1. Advanced Node – Tiled Display, Multiple Video Streams, Localized Audio 2. Room Node – Shared Display, Multiple Video Streams, Single Audio Stream (AG 1.x Node) 3. Desktop Node – Desktop Monitor, Multiple Video Streams, Single Audio Stream (AG 1.x PIG) 4. Laptop Node – Laptop Display, Single Video Stream, Single Audio Stream 5. Minimal Node – Compact Display, Single Video Stream, Single Audio Stream What Hardware? Cameras, Microphones, Speakers, Display, Input Devices – Get Audio Correct! Software Requirements? Python 2.2, wxPython, GT2.0, pyGlobus
83 Summary of Changes from 1.0 to 2.0 1.0: Virtual Venues – static media configurations (assumed multicast technology); Virtual Venues Client – single-server assumption, web browser; Nodes – non-extensible single reference platform (AG 1.1, 1.2; PIGs introduced); Applications – layered outside of the AG software. 2.0: Virtual Venues – dynamic media configurations (capability brokering functionality), integrated data storage, support for highly scalable deployments (multicast addressing; topological simplicity – connections as URLs); Virtual Venues Client – streamlined client, integrated Grid security, workspace docking, application development interfaces exposed; Nodes – defined in terms of resources, management UI, interfaces exposed for building new Services, broader set of reference platforms; Applications – venue-hosted collaborative apps; Network Services
84 Summary of Changes from 2.0 to 2.1 2.0: (as listed on the previous slide). 2.1 – perhaps: Virtual Venues – re-engineered data storage, unicast fallback, authorization layer complete (• Administrators role • Allowed Entry role); Virtual Venues Client – simplified certificate management; Nodes – node configuration wizard, node service examples; Services – service examples; Applications – completed Shared Presentation viewer; Network Services; AG Development CA moving to OpenCA
85
86 IP Multicast and AG A brief review of IP Multicast - what is multicast? Overview of IP multicast protocols IP multicast addressing ASM and SSM AG networking issues - importance of the Beacon - how to maintain a relationship with network people - what to do when the network fails - other networking issues - IPv6 - QoS - SSM
87 A Brief Review of Multicast Some things about multicast you might want to know when you are assigned to deploy and run the AG
88 What is multicast? Rather than sending a separate copy of the data for each recipient, the source sends the data only once, and routers along the way to the destinations make copies as needed.
89 Multicast Analogy (I) Radio analogy: each multicast address is like a radio frequency, on which anyone can transmit, and to which anyone can tune in.
90 Multicast Analogy (II) – agricultural approach Consider a farmer's field that needs water, near a pond. A farmer will dig a trench starting at the field and working towards the pond. When the trench gets to the pond, water flows back down the trench to the field. The field is the set of multicast receivers, and the pond is the multicast source. Internet multicast routing protocols "dig a trench" — that is, build forwarding state from the receiver back towards the source. Packets flow from the source to the receivers only when the forwarding state is properly created.
91–92 What's Right? (network diagrams; labels: Mcast Enabled ISP, Content Owner, Unicast-Only Network, Mcast Traffic, Mcast Join, Mcast Enabled Local Provider)
93 What's Wrong?
(network diagram: the receiver's Mcast Join gets no traffic across the Unicast-Only Network — ..tick ..tick ..tick Timeout!)
94 What's Wrong? To gain maximum audience size, unicast fallback streams (i.e. servers) are deployed. The Session Description File defines the mcast timeout AND the backup unicast transport.
95–97 What's Wrong? (diagrams: after the timeout the Mcast Enabled Local Provider sends a Ucast Request and receives a Ucast Stream from the fallback server instead of multicast — $$$$!!! the unicast fallback is expensive for the Content Owner and the Unicast-Only Network)
98 IP multicast in the past The original multicast network was called the MBone. It used a simple routing protocol called DVMRP (Distance Vector Multicast Routing Protocol). As there were only isolated subnetworks that wanted to deal with DVMRP, the old MBone used tunnels to get multicast traffic between DVMRP subnetworks, i.e., the multicast traffic was hidden and sent between the subnetworks via unicast. This mechanism was simple, but required manual administration and absolutely could not scale to the entire Internet. Worse, DVMRP requires substantial routing traffic behind the scenes, and this grew with the size of the MBone. Thus, the legend grew that multicast was a bandwidth hog.
99 Multicast Grows Up Starting about 1997, the building blocks for a multicast-enabled Internet were put into place. An efficient modern multicast routing protocol, Protocol Independent Multicast – Sparse Mode (PIM-SM), was deployed. The mechanisms for multicast peering were established, using an extension to BGP called Multiprotocol BGP (MBGP), and peering became routine. The service model was split into: • a many-to-many part (e.g., for videoconferencing): Any-Source Multicast (ASM), and • a one-to-many (or "broadcast") part: Source-Specific Multicast (SSM). By 2001, these had completely replaced the old MBone. This path is not unusual for new technology...
100 Overview of IP Multicast Architecture (diagram: senders, delivery tree, receivers, membership reports) Group Management Protocol (e.g. IGMP) - enables hosts to dynamically join/leave multicast groups. Receivers send group membership reports to the nearest router. Multicast Routing Protocol (e.g. PIM-SM) - enables routers to build a delivery tree between the sender(s) and receivers of a multicast group.
101 IP Multicast building blocks The SENDERS send without worrying about receivers Packets are sent to a multicast address (RFC 1700) This is in the class D range (224.0.0.0–239.255.255.255) The RECEIVERS inform the routers what they want to receive, done via the Internet Group Management Protocol (IGMP), version 2 (RFC 2236) or later The routers make sure the STREAMS make it to the correct receiving networks. Multicast routing protocol: PIM-SM
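The sender/receiver asymmetry described in the building-blocks slide is visible directly at the socket level. The sketch below (modern Python, standard library only) uses an arbitrary example group in the class D range; the IP_ADD_MEMBERSHIP option is what makes the host emit an IGMP membership report, and the sender's TTL corresponds to the scoping values listed on the next slides.

```python
import socket

GROUP, PORT = "224.2.231.117", 55000    # arbitrary example group in 224/4

# Receiver: bind to the port, then ask the kernel to join the group.
# The join is what generates the IGMP membership report to the local router.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")   # group + any interface
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Sender: no join needed; it just addresses packets to the group and sets a TTL
# that limits how far the traffic may propagate (32 = "same site" in the old scheme).
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
tx.sendto(b"hello, multicast", (GROUP, PORT))

data, src = rx.recvfrom(1500)    # loops back locally; across a WAN this needs PIM etc.
print("received", data, "from", src)
```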
102 Multicast Protocol Summary IGMP - the Internet Group Management Protocol is used by hosts and routers to tell each other about group membership. PIM-SM - Protocol Independent Multicast–Sparse Mode is used to propagate forwarding state between routers. MBGP - the Multiprotocol Border Gateway Protocol is used to exchange routing information for inter-domain RPF checking. MSDP - the Multicast Source Discovery Protocol is used to exchange active source information between RPs.
103 CIDR Address Notation The multicast address block is 224.0.0.0 to 239.255.255.255 It is cumbersome to refer to address blocks in the above fashion. Address blocks are usually described using "CIDR notation" This specifies the start of a block, and the number of bits THAT ARE FIXED. • In this shorthand, the multicast address space can be described as 224.0.0.0/4 or, even more simply, as 224/4. The fixed part of the address is referred to as the prefix, and this block would be pronounced "two twenty four slash four." Note that the LARGER the number after the slash, the LONGER the prefix and the SMALLER the address block.
104 Multicast Addressing IP Multicast Group Addresses 224.0.0.0–239.255.255.255 Class "D" Address Space • High-order bits of the 1st octet = "1110" The TTL value ~may~ define scope and limit distribution • An IP multicast packet must have TTL > interface TTL or it is discarded • values are: 0=host, 1=network, 32=same site, 64=same region, 128=same continent, 255=unrestricted • No longer recommended as a reliable scoping mechanism
105 Multicast Addressing draft-albanna-iana-ipv4-mcast-guidelines-01 http://www.iana.org/assignments/multicast-addresses Examples of Reserved & Link-local Addresses • 224.0.0.0 - 224.0.0.255 reserved & not forwarded • 239.0.0.0 - 239.255.255.255 Administrative Scoping • 224.0.0.1 - All local hosts • 224.0.0.2 - All local routers • 224.0.0.4 - DVMRP • 224.0.0.5 - OSPF • 224.0.0.6 - Designated Router OSPF • 224.0.0.9 - RIP2 • 224.0.0.13 - PIM • 224.0.0.15 - CBT • 224.0.0.18 - VRRP
106 Dynamic Address Allocation SDR – the de facto… 224.2.0.0 - 224.2.255.255 (224.2/16) SDP/SAP block Still used, but not required Will not scale well Limited address space A single directory application for ALL content?!?! Web links should prevail.
107 Multicast Addressing Administratively Scoped Addresses – RFC 2365 239.0.0.0–239.255.255.255 Private address space • Similar to RFC 1918 unicast addresses • Not used for global Internet traffic • Used to limit the "scope" of multicast traffic • The same addresses may be in use at different locations for different multicast sessions Examples • Site-local scope: 239.253.0.0/16 • Organization-local scope: 239.192.0.0/14
108 Multicast Addressing GLOP addresses Provide globally available private Class D space 233.x.x/24 per AS number RFC 2770 How? AS number = 16 bits • Insert the 16-bit ASN into the middle two octets of 233/8 Online GLOP calculator: www.shepfarm.com/juniper/multicast/glop.html
109 Multicast Addressing SSM - draft-ietf-ssm-arch-*.txt (almeroth) 232/8 – IANA assigned One-to-many ONLY (no shared trees) Guarantees ONE source on any delivery tree • Content security (no 'Captain Midnight') Reduced protocol dependence – more later.. Solves address allocation issues for inter-domain one-to-many • ~tree address is 64 bits – (S,G) The host must learn of the source address out-of-band (e.g. a web page) Requires a host-to-router source AND group request • IGMPv3 include-source list Hard-coded behavior in 232/8 (JunOS & IOS) • draft-ietf-mboned-ssm232-01.txt • Configurable to expand the range
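The GLOP rule above is easy to check in a few lines of Python: split the 16-bit AS number into two octets and drop them into the middle of 233/8. The AS number below is an arbitrary example, and the last two lines show the same style of check for the administratively scoped and SSM ranges.

```python
import ipaddress

def glop_block(asn: int) -> str:
    """Return the GLOP /24 multicast block for a 16-bit AS number (RFC 2770)."""
    if not 0 <= asn <= 0xFFFF:
        raise ValueError("GLOP covers 16-bit AS numbers only")
    return f"233.{asn >> 8}.{asn & 0xFF}.0/24"

print(glop_block(9264))   # example ASN 9264 = 0x2430 -> 233.36.48.0/24

# Scoping checks: is an address organization-local (RFC 2365)? In the SSM range?
addr = ipaddress.ip_address("239.192.5.1")
print(addr in ipaddress.ip_network("239.192.0.0/14"))   # True  - organization-local scope
print(addr in ipaddress.ip_network("232.0.0.0/8"))      # False - not in the SSM block
```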
110 ASM multicasting PIM-SM SM stands for "Sparse Mode." RFC 2362 and draft-ietf-pim-sm-v2-new-06.txt There is also a Dense Mode but we don't recommend using it. Cisco has a proprietary "Sparse-Dense" mode which they use for RP discovery. PIM-SM allows for both Shared Trees (STs) from a Rendezvous Point (RP) and Shortest Path Trees (SPTs) from the source. Note that STs are shortest path, just to the RP. There are two ways to use PIM-SM…
111 ASM and SSM ASM: Any-Source Multicast. Traditional multicast – data and joins are forwarded to a Rendezvous Point (RP). All routers in a PIM domain must have the RP mapping. When load exceeds a threshold, forwarding swaps to the shortest-path tree. The default threshold is one packet; in this case, the sole purpose of the ST is to learn which sources are active (with IGMPv2, the receiver can only specify the group, not specific sources). State increases (not everywhere) as the number of sources and the number of groups increase. Source-tree state is refreshed when data is forwarded and with Join/Prune control messages. SSM: Source-Specific Multicast. PIM-SM without RPs – instead, the source is learned out-of-band, and a shortest-path tree is built directly to it.
112
113 Other AG Networking Issues The importance of the Beacon - keeping a good relationship with network people - what to do if the network fails Other networking issues - IPv6 - QoS - SSM
114 The importance of the Beacon Get network staff involved EARLY and OFTEN Run beacons continuously Steps to take before falling back to bridges
115 Campus Networking Campus network folks are always overworked, usually scarred, and rightfully conservative. Beacon technology is key: Demonstrate the reality of multicast deployment Help give immediate feedback on changes Watch how stable the routing is Management communication to campus networking: Do NOT wait until the last minute to make it work Invite them to join the Access Grid MOO Ask them to provide read-only access to routers Argonne, NCSA, and other people are eager to help
116 Why Run a Beacon? Multicast routing requires active sources to test Multicast routing can fail over time (known bugs) Campus networking folks need to see how well (or not) their configuration is working You need to see whether your multicast is working
117 What if multicast breaks? Use the back channel. Networking staff will be online, primed to help. Have your local campus networking people on call (telephone, pager, cell phone) in case the problem turns out to be in your campus network. Try to give the network people 5-10 minutes to resolve the problem before doing anything yourself. Stopping your source makes it impossible to debug! After 10 minutes of quick effort, an alternate network path may be created for you while network debugging continues. Be prepared to switch back at a break.
118 Other networking issues IPv6 QoS SSM
119 IP Multicast Deployment Starting to see IP Multicast moving out of the research arena into commercial use http://www.on-the-i.com/cmn/cmninfo.html http://www.broadcast.com/broadband/television/fashiontv Finding and squashing corner-case bugs and operational vulnerabilities in protocols and code MSDP RPF rules MSDP SA flooding without rate limits
120 IP Multicast Deployment More institutions and networks are getting multicast deployed and working properly In the US and internationally, SC Global is a major driver for IP Multicast deployment. How about in Taiwan?
121 IPv6 Much larger address space: 128 bits Starting to be supported by mainstream operating systems and network vendors: Cisco: unsupported code out now; the first unicast-only and slow production version is due out in 1Q2001. Faster and multicast later. Microsoft: experimental patches available now for Windows 2000. (Be careful!) Linux, AIX, others: full host support. IPv6 multicast routing standards are not yet complete, especially those that work across Autonomous Systems.
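On the host side, IPv6 multicast looks much like the IPv4 case; the sketch below joins an arbitrary example group with the IPV6_JOIN_GROUP socket option (group membership in IPv6 is handled by MLD rather than IGMP). Whether such traffic actually crosses routers still depends on the inter-domain IPv6 multicast routing pieces that, as noted above, were not yet complete.

```python
import socket
import struct

GROUP, PORT = "ff15::abcd", 55000     # ff15::/16 = transient, site-local scope (example)

rx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))

# ipv6_mreq = 16-byte group address + 4-byte interface index (0 = let the kernel pick)
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
rx.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

tx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 32)
tx.sendto(b"hello, IPv6 multicast", (GROUP, PORT))
print(rx.recvfrom(1500))    # loops back locally on most systems
```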
122 IPv6 Experimental IPv6 networks up and running 6bone, 6TAP Generally use IPv6-over-IPv4 tunnels IPv6 address assignments are available from major networks participating in IPv6 trials ESnet, Abilene, vBNS+, and others
123 Quality of Service General idea: preferred queuing and/or discard policy for "special" packets. Not yet clear how to define "special" for QoS purposes. (Biggest problem, IMHO) Hardware implementations are slowly becoming available. Signaling standards are very much in flux — especially those that cross administrative boundaries.
124 Quality of Service Questions: What Access Grid traffic can be thrown away first, minimizing impact on the experience? Which streams need to be preserved? Which can't be delayed? Given the answer to that, how can the packets be marked so they can be detected by the network? AG software tools may need enhancements
125 Source Specific Multicast Lets receivers specify from which sources they want to receive traffic. Requires IGMPv3. The application must know about all active sources without the network helping. Current IGMPv2: Join (G) IGMPv3: Join (S,G), Prune (S,G), etc. Will the AG become an IGMPv3 application?
126 Varieties of AG
127 Varieties of AG Open Mash VRVS (Virtual Rooms Videoconferencing System) Berkeley Internet Broadcasting System (BIBS) ACE (Advanced Collaborative Environments) Access Grid Augmented Virtual Environments (AGAVE), Tele-Immersion - CAVE, IDesks, Tiled Displays, autostereoscopic displays Community Grid Computing Labs, Indiana University, Web Services and Peer-to-Peer Technologies for the Grid AG-ANL…… inSORS – commercial AG
128 Open Mash An open-source project started in 2001, led by U.C. Berkeley (NSF) Open Mash (the Mash streaming media toolkit and distributed collaboration applications based on the Internet MBone tools and protocols) Modifies vic (adding H.263), DV, MPEG-4 http://sourceforge.net/projects/openmash
129 VRVS 3.0 The Virtual Rooms Videoconferencing System, started in 1995 and led by Caltech (California Institute of Technology) A web-based system with booking and H.323 integration Reflectors provide a pure software-based MCU with a peer-to-peer structure, and routing is done between Reflectors Version 3.0 adds the VAG (VRVS AG Gateway, or Virtual Access Grid)
130 Berkeley Internet Broadcasting System (BIBS) Started in 2001, led by the University of California at Berkeley Progress has currently stalled; VNC+vic integration was planned for 2002, but there is still no progress The architecture is not flexible
131 BIBS System Architecture (diagram; components: video web server, video gateway computer, database server, streaming servers, the Internet, studio classroom, videotape recorder)
132 Commercial AG http://www.insors.com
133 NCHC and AG
134 NCHC and AG Summary of NCHC's AG activities Deployment status of AG in Taiwan
135 NCHC & AG 10/2002 – deployed the 1st AG node at Hsin-chu 01/2003 – deployed the 2nd node at Tainan 03/2003 – attended the GGF7 AG 2.0 workshop 04/2003 – held an Access Grid user group meeting in Long-tang 04/2003 – Taiwan AG website 05/2003 – SARS Grid 05/2003 – installed a Beacon server 06/2003 – deployed two perpetual bridge servers at both Hsin-chu and Tainan 08/2003 – a demo of the AG at the press conference
136 Major deployment activities • During the SARS outbreak • Distance learning • Other activities
137 Some SARS background info… When the medical staff of several key hospitals in Taiwan were quarantined by the SARS epidemic, threatening to make a critical situation even worse, computer scientists there, in the U.S., and throughout the Asia Pacific Rim turned to grid computing technology and old-fashioned teamwork.
PRAGMA -- the Pacific Rim Applications and Grid Middleware Assembly -- showed how relationships and expertise developed to tackle computational research could also help thousands of SARS patients in Taiwan. GRID TECHNOLOGY AND THE FIGHT AGAINST SARS By Mike Gannis, SDSC Senior Science Writer
138 SARS and AG • There were designated hospitals to be quarantined. • Doctors and patients in those hospitals would be quarantined • But there would be new discoveries about SARS from those hospitals • Quarantined doctors had much valuable first-hand data, e.g. SARS patients' X-rays, to discuss with other doctors, and the only way to do so was via video conferencing • And the AG could help
139 SARS GRID (node equipment: touch-pad control, audio equipment/mixer, DVD recorder, video matrix switch, KVM switch, H.323 MCU (VS4000), Ethernet switch, video capture PC, display PC)
140 SARS GRID
141 Currently… AG nodes deployed in the medical community: the Center for Disease Control (CDC), Lin-Kou Chang Gung Memorial Hospital (LCG), Taipei Jen-Ai Hospital (TJA), Taipei San-Chung Hospital (TSC)
142 Training doctors via AG
143 AG & Collaborative teaching What? • Giving lectures collaboratively • Means several teachers for one class How? • Demonstrated the possibility on Aug 14, 2003 • 5 professors/universities in Taiwan involved • The facilitator joined the session from Australia • The session went via NCHC's bridge server We hope to give accredited courses via the AG this coming fall
144 AG & Collaborative Teaching
145 AG & Collaborative Teaching cont'd
146 AG nodes in NCHC • Hsin-chu AG node • Tainan AG node
147 Access Grid Deployment status in Taiwan (deployment map; legend: Hsinchu and Tainan Science Park nodes, nodes already deployed, hospitals already deployed; sites: Jen-Ai Hospital, San-Chung Hospital, Lin-Kou Chang Gung, Tamkang, Chengchi, the CDC, Hsinchu Science Park, Tunghai, Feng Chia, Chung Hsing, Dong Hwa, Penghu, Huwei, Tainan Science Park, Cheng Kung, Sun Yat-sen, Pingtung Science and Technology, Taitung)
148 Other activities
149
150 Future works A more user-friendly interface vic codec modification Security Inter-domain multicast AG over VPN Keep supporting any future AG deployments in Taiwan
151 For more information Access Grid Technology Helps Taiwan Doctors Quarantined By SARS (GridComputingPlanet.com, 06/03/2003) http://www.gridcomputingplanet.com/news/article.php/2216671 High-Tech Collaboration Helps Taiwan Fight SARS (SDSC, 06/03/2003) http://www.sciencedaily.com/releases/2003/06/030603083806.htm GRID TECHNOLOGY AND THE FIGHT AGAINST SARS (Grid Today, 06/09/2003) http://www.gridtoday.com/03/0609/101510.html
152 For more information (cont'd) NCHC's AG software development http://140.110.60.99:8080/SARS_GRID/ Photos of the SARS Grid http://sarsgrid.nchc.org.tw/ TW AG http://140.110.61.10/modules/newbb/
153 Thanks for listening!