An Overlay Data Plane for PlanetLab
Andy Bavier, Mark Huang, and Larry Peterson
Department of Computer Science
Princeton University
Princeton, NJ, 08540
{acb, mlh, llp}@cs.princeton.edu
Abstract
PlanetLab has significantly lowered the barriers to
deploying overlays. This paper describes how to lower
them further by introducing a general data plane for
overlay networks. Researchers can plug their own control planes into this data plane to easily run fully functional overlays on PlanetLab, and students can use the
overlay data plane as a hands-on tool for learning about
internetworking concepts. The overlay data plane is the
first step in the “Internet In A Slice” project, which will
let members of the PlanetLab community experiment
with virtual Internets running on PlanetLab.
1. Introduction
Novel network services are currently being deployed
in overlays, rather than in the routers and protocols that
compose the Internet. A key reason for this is that
the barriers to innovation at the overlay level are much
lower. For example, Skype clients connect to a local “super node” to patch into a P2P overlay that carries VoIP
traffic [1, 5]. Robust routing overlays such as RON can
route traffic along alternate paths when the default Internet path fails or becomes congested [4, 8]. In the last
year, SIGCOMM, NetGames, and IPTPS were broadcast using a multicast overlay [18, 19]. OpenDHT [11]
is a public distributed hash table overlay that, among
other things, currently caches the FreeDB CD-metadata
service. Finally, as a historical point, we note that the
Internet itself was originally implemented as an overlay
on the telephony network because this was easier than
changing the underlying network—that is, for much the
same reason that innovators are choosing to overlay new
network services on the Internet today.
We assert that the barriers to innovation using overlays could be lower still. Consider the problem of constructing a real routing overlay; one needs at least:
1. a forwarding engine for the packets carried by the
overlay (an overlay router);
2. a mechanism for clients to opt-in to the overlay and
divert their packets to it, so that the overlay can
carry real traffic (an overlay ingress);
3. a means of exchanging packets with servers that
don’t know anything about the overlay, since most
of the world exists outside of it (an overlay egress);
4. a smart method of configuring the engine’s forwarding tables (a control plane); and
5. a collection of distributed machines on which to deploy the overlay, so that it can be properly evaluated
and can attract real users.
Before PlanetLab, the fifth item above was the main
barrier to deploying large-scale overlays. PlanetLab [6,
14] is a geographically distributed overlay platform designed to support the deployment and evaluation of
planetary-scale network services. It currently includes
over 500 machines spanning 252 sites and 29 countries,
and has supported over 450 research projects. PlanetLab can support large-scale overlays running in slices
(i.e., collections of distributed virtual machines). New
network services can be experimentally evaluated, and
widely deployed, by tunneling traffic between overlay
instances running on PlanetLab nodes all over the world.
PlanetLab gives network service designers the opportunity to demonstrate the scalability and robustness of
their ideas by carrying real traffic, on behalf of real
users, while coping with the inherent unpredictability of
the real Internet. The multicast and OpenDHT overlays
mentioned above are both hosted on PlanetLab.
The first four items in the above list comprise the
software that actually runs the overlay. The first three
(the forwarding engine, ingress, and egress mechanisms)
form its data plane, in contrast to the fourth item, the
overlay control plane. We note that many of the recent
innovations in network services (including the examples
mentioned earlier) impact the control plane of the overlay while being fairly agnostic as to the form of the overlay’s data plane. In fact, our sense is that there is a significant community of overlay researchers who are mainly
interested in working on the control plane. This community would be happy if someone provided an overlay
data plane that they could “plug into” and start experimenting with their routing ideas right away.
This paper describes a data plane for overlay networks provided as part of the “Internet In A Slice”
(IIAS) overlay. The ultimate goal of this work is to let
members of the PlanetLab community run a virtual Internet in a slice, running real routing protocols on top of
a routing mesh that reflects the topology of the underlying Internet. The IIAS data plane forms an important
piece of this vision. However, we believe that, by itself,
the IIAS data plane lowers the barriers to building overlays and so we are making it available now.
The paper makes two contributions. First, it lays out
the motivation behind IIAS and describes the complete
IIAS overlay that we are building. Ultimately the IIAS
overlay will run a control plane that mirrors the Internet;
we believe that, after validating incremental changes to
the Internet using simulations and isolated testbed experiments, researchers will use IIAS as the next step in
transitioning their ideas to the “real world”. Second, it
presents the IIAS data plane in detail and describes how
to use it in a standalone form to build overlays. The data
plane comes bundled with simple tools to bring up the
overlay with a set of default routes, and to dynamically
add or change routes in the running overlay. We expect
that users will leverage these tools to build sophisticated
control planes.
The road map for the paper is as follows. Section 2
provides the motivation behind Internet In A Slice and
describes its potential role in networking research. Section 3 presents an example overlay built on the IIAS data
plane and discusses its component parts. Section 4 evaluates the performance of the IIAS data plane running
on PlanetLab, Section 5 discusses future directions, and
Section 6 concludes the paper.
2. Motivation
Clearly the Internet is the networking research community’s greatest success story, yet the Internet’s very
success has led to its ossification [15]. Today, significant
obstacles hamper the adoption of new architectural features in the Internet. The need for consensus in a multiprovider Internet is one of the largest barriers. For example, adopting a new version of BGP not only requires
changing routers, but also may require that competing
ISPs jointly agree to the change. Economics is another
factor; IP multicast is widely available in routers, yet
there has been little financial incentive for ISPs to enable it. A third obstacle is the difficulty of evaluating
a new idea’s real technical merit a priori: many believe
that extensive simulations are no substitute for running
in the “real world”.
A key goal of Internet In A Slice is to give networking researchers another tool for attacking these and
other barriers to getting their research adopted. First,
they can sidestep the need for consensus by incrementally deploying their systems on PlanetLab. Second,
they can demonstrate technical merit along the dimensions of scalability and robustness: running their code
on hundreds of PlanetLab nodes worldwide shows scalability, and keeping it running shows robustness. Finally, PlanetLab’s presence at hundreds of universities
gives researchers the opportunity to attract traffic from
real users; for instance, the CoDeeN [17] and Coral [7]
CDNs running on PlanetLab distribute on the order of a
terabyte of traffic to a half million unique IP addresses
daily. Researchers can build the case that their ideas
have economic value by attracting and keeping these
“customers”.
IIAS will let a researcher run a virtual Internet, consisting of a data plane and a control plane, on PlanetLab.
The IIAS data plane will be described in Section 3. The
control plane will integrate two components, each running on every node in a particular instance of the IIAS
overlay: a topology discovery service and a suite of Internet routing protocols. The topology discovery service
helps the overlay nodes self-organize into a routing mesh
that represents the structure of the underlying network.
The routing protocol suite contains a complete set of
commonly used Internet routing protocols; it propagates
routes advertised by particular overlay nodes through the
routing mesh, and populates the data plane with a set of
next hops for each overlay node.
We leverage PLUTO [13] and XORP [9] for the
topology discovery service and routing protocol suite,
respectively. First, PLUTO helps overlays make informed decisions about peering and routing by collecting network topology information (e.g., BGP data, TCP
statistics, ping and traceroute results) and using it to
answer high-level queries from overlays about the network. For example, an overlay can find out what autonomous system it is in and what other AS’s peer with
it; find an overlay path between two nodes that is disjoint from the Internet’s default path; or get the set of
overlay nodes that have the lowest latency from a particular node. We plan to use the routing mesh service of
PLUTO to choose peering relationships between overlay
nodes that mirror the peering relationships in the Internet itself. Second, XORP (the eXtensible Open Router
Platform) provides a rich open-source suite of IPv4 and
IPv6 routing protocols. Currently XORP supports BGP,
RIP, PIM-SM, and IGMP/MLD, with more in the works.
XORP operates on a forwarding engine abstraction that
maps well onto the Click router [12]; the IIAS data plane
is built using Click, as will be described in Section 3.
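To sketch how a control plane might consume this information, the following Python fragment shows one way peers could be chosen using a PLUTO-like underlay service. The underlay object and its get_asn and closest_nodes calls are hypothetical stand-ins of ours, since the paper does not specify PLUTO's programming interface.

# Hypothetical sketch: choosing overlay peers with a PLUTO-like
# routing underlay. The `underlay` API (get_asn, closest_nodes) is
# invented for illustration; PLUTO's real interface may differ.

def choose_peers(local_node, all_nodes, underlay, max_peers=4):
    """Peer with same-AS nodes (for RIP) and the nearest nodes in
    other AS's (for BGP), mirroring the IIAS peering plan."""
    local_as = underlay.get_asn(local_node)
    rip_peers = [n for n in all_nodes
                 if n != local_node and underlay.get_asn(n) == local_as]
    # closest_nodes() is assumed to return nodes ordered by latency.
    nearby = [n for n in underlay.closest_nodes(local_node)
              if underlay.get_asn(n) != local_as]
    bgp_peers = nearby[:max(0, max_peers - len(rip_peers))]
    return rip_peers, bgp_peers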
The IIAS overlay will essentially reflect the way that
the Internet works today. Using the routing mesh and
AS topology information from PLUTO, IIAS nodes will
exchange routes using RIP with peers in the same AS
while using BGP with peers in other AS’s. Once peering relationships are established, each node in the overlay can advertise routes for hosts that are near it in the
network. These routes then are propagated throughout
the overlay, and packets for a particular destination are
forwarded to the overlay node that advertised the best
route for that destination. We believe that IIAS will provide the logical next step, after simulations and isolated
testbed experiments, for researchers to demonstrate the
value of their ideas—by running experiments with real
traffic at a global scale.
3. IIAS Data Plane
This section describes the IIAS data plane. We first
describe a simple overlay example and explain how to
configure the IIAS data plane using the tools distributed
with it. Individual nodes in the overlay are implemented
using Click [12]; we then present an overview of the
Click configurations of ingress, egress, and router nodes
in the overlay. We conclude the section by detailing the
system-level path that a packet takes through the IIAS
data plane from the client to the server and back to the
client.
3.1. Example Overlay
We drive the discussion of the IIAS data plane
using a very simple overlay: carrying the traffic of
clients browsing www.nationalgeographic.com
over Internet2 (I2), a high-bandwidth academic research
network in the United States. Clients connect to this
overlay at a local PlanetLab node that is connected to I2;
packets destined to National Geographic’s web site in
Washington, DC are routed to a nearby PlanetLab node
at the University of Maryland (UMD). By default, traffic flowing between these PlanetLab nodes will be carried across I2. At UMD the packets leave the overlay
and travel across the public Internet to National Geographic’s server; the server’s reply travels the opposite
path back to the client.

Figure 1. Example Routing Overlay (clients C1-C5 attach via PPTP/GRE tunnels to ingress nodes I at princeton, stanford, and wustl; UDP tunnels connect the overlay nodes to the egress node E at umd, which NATs traffic to www.nationalgeographic.com)

While not particularly interesting from a research standpoint, this example lets us present the basic features and capabilities of IIAS. Also
note that, though our example is purely static, we provide scripts to dynamically add nodes and routes to the
IIAS data plane as well.
Figure 1 shows the example overlay. The overlay
has ingress nodes (labeled I) at Princeton, Stanford, and
Washington University (WUSTL), and an egress node
(labeled E) at UMD. The ingress and egress nodes are
both running specific configurations of Click that will
be described later, but the client and server require no
special software to participate in the overlay. In practice
an overlay could support many more ingress and egress
nodes, as well as router nodes that only forward packets
between other nodes in the overlay.
Simple scripts distributed with the IIAS data plane
make it easy to configure the overlay described above.
For example, to construct the overlay of Figure 1, the
user would create an overlay.cfg file as follows:
%Nodes = (
    "princeton" => "planetlab-7.cs.princeton.edu",
    "stanford"  => "planetlab-1.stanford.edu",
    "wustl"     => "vn2.cs.wustl.edu",
    "umd"       => "planetlab2.cs.umd.edu",
);

%Dests = (
    "natgeo" => "www.nationalgeographic.com",
);

# End-to-end overlay paths
%Routes = (
    "princeton > umd > natgeo",
    "stanford > umd > natgeo",
    "wustl > umd > natgeo",
);
The overlay.cfg contains Perl data structures
that describe a static overlay. For brevity and clarity, the overlay routes are constructed using nicknames.
The first two data structures map nicknames onto DNS
names for the nodes participating in the overlay, and
the destinations to which the overlay will route packets.
In our example, we have four nodes and one destination. The last data structure lists the end-to-end paths
through the overlay. Note that each end-to-end path actually corresponds to two sets of routes: one from the
ingress to the egress for the given destination, and one
from the egress to the ingress for the clients connected
to that ingress. Once the overlay.cfg file contains
the desired overlay specification, the IIAS scripts generate Click configuration files for each node in the overlay, distribute the Click executable and the configuration
files to all of the nodes, and invoke Click on each node.
Note that, while these scripts are useful for configuring a set of initial routes through the overlay, they are
not that interesting in themselves. We expect that users
will build on them to produce more dynamic and sophisticated overlay control planes.
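To make the scripts' job concrete, here is a small Python sketch (our reconstruction, not the bundled Perl tooling) that expands end-to-end paths of the form used in %Routes into the per-node tunnel routes of Section 3.2, including the reverse routes back toward each ingress's clients.

# Sketch (in Python rather than the bundled Perl) of expanding paths
# like "princeton > umd > natgeo" into per-node tunnel routes. Each
# path yields forward entries toward the destination and reverse
# entries back toward the ingress's client address block. The ingress
# delivers its own clients locally via PPTP, not through this table.

def expand_paths(paths):
    routes = {}  # node -> list of (destination, gateway) entries
    for path in paths:
        hops = [h.strip() for h in path.split(">")]
        nodes, dest = hops[:-1], hops[-1]
        ingress = nodes[0]
        # Forward: each node tunnels packets for `dest` to its next hop.
        for cur, nxt in zip(nodes, nodes[1:]):
            routes.setdefault(cur, []).append((dest, nxt))
        # Reverse: downstream nodes send the ingress's clients upstream.
        client_block = ingress + "_client"
        for cur, prev in zip(nodes[1:], nodes):
            routes.setdefault(cur, []).append((client_block, prev))
    return routes

if __name__ == "__main__":
    table = expand_paths([
        "princeton > umd > natgeo",
        "stanford > umd > natgeo",
        "wustl > umd > natgeo",
    ])
    for node, entries in table.items():
        print(node, entries)

Running this on the three paths above reproduces the tables shown later in Figures 2 through 4: princeton gets (natgeo, umd), while umd gets a (princeton_client, princeton) entry for the reverse direction, and likewise for stanford and wustl.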
3.2. Node Configurations
The Click modular router project [12] provides an
easy-to-use tool for building packet routers in software.
The Click community has implemented hundreds of individual elements to perform routing, tunneling, address
rewriting, packet scheduling, filtering, flow accounting,
tracing, and so on. One advantage of Click is that it
combines ease of prototyping and high performance:
Click routers can be implemented and debugged in user
space, and then recompiled for loading as a Linux kernel module. For example, on a 700MHz Pentium III, a
Click IP router running in the kernel can forward up to
435Kpps [2]. A Click router is a graph of modules called
elements; the elements process and transform packets as
they flow through the graph.
In the IIAS data plane, there are three distinct
Click configurations, corresponding to (internal) router,
egress, and ingress overlay nodes. We describe these
next.
3.2.1. Router Node
A router node is always an intermediate node on an
end-to-end path through the data plane, meaning that
it only tunnels packets to other nodes in the overlay.
Though there are no router nodes in Figure 1, the ingress
and egress nodes build on the basic functionality of the
router node and so we describe it first.
Suppose the overlay added a router node on the
path between the Princeton ingress node and the UMD
egress node; Figure 2 shows the configuration of this
node. IP packets from other overlay nodes arrive on
a UDP socket listening on a particular port, denoted
FromUDPTunnel in the figure. Conceptually this single element represents the logical endpoint of multiple
UDP tunnels, since all the tunnels use the same UDP port.

Figure 2. Router Node (FromUDPTunnel feeds IPRouteLookup, whose table maps princeton_client via gateway princeton to output 0, natgeo via gateway umd to output 0, and default to output 1; output 0 connects to ToUDPTunnel, output 1 to Discard)
Packets arriving from the overlay are pushed to the
route lookup element, denoted IPRouteLookup. The
box above this element shows its forwarding table, containing (Destination, Gateway, Out) 3-tuples (though
sometimes the Gateway is omitted). This table has three
entries: packets destined to a Princeton overlay client
are forwarded to gateway princeton on interface 0;
packets destined for National Geographic’s server are
forwarded to gateway umd on interface 0; and all other
packets are emitted on interface 1. Packets emitted onto
interface 0 are pushed to the ToUDPTunnel element,
while packets emitted onto interface 1 are discarded—in
this case, the overlay node has no default route.
Finally, the ToUDPTunnel element tunnels packets
on a UDP socket to the next hop in the overlay. If the
route lookup element found a Gateway for the packet’s
destination, it tagged the packet with the gateway’s address; ToUDPTunnel extracts the tagged address and
sends the IP packet to a UDP port on the gateway node.
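For readers who prefer code to element graphs, the following Python sketch approximates the router node's behavior: one UDP socket terminates all tunnels, a longest-prefix lookup picks a gateway, and unmatched packets are discarded. It is a simplified stand-in for the Click configuration, with invented addresses; the shared port 4700 is the one used in Section 3.3.

# Simplified Python stand-in for the router node's Click graph:
# FromUDPTunnel -> IPRouteLookup -> ToUDPTunnel / Discard.
# The prefixes and gateway name below are illustrative, not from
# the paper.
import ipaddress
import socket

TUNNEL_PORT = 4700  # one socket terminates many logical tunnels

# Longest-prefix table, most-specific first: (prefix, gateway).
# A gateway of None plays the role of the Discard element.
TABLE = sorted([
    (ipaddress.ip_network("10.1.0.0/24"), "princeton.example.org"),
    (ipaddress.ip_network("0.0.0.0/0"), None),
], key=lambda e: e[0].prefixlen, reverse=True)

def lookup(ip_packet):
    """IPRouteLookup: match the IPv4 destination (header bytes 16-19)."""
    dst = ipaddress.ip_address(ip_packet[16:20])
    for prefix, gateway in TABLE:
        if dst in prefix:
            return gateway
    return None

def router_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", TUNNEL_PORT))  # FromUDPTunnel
    while True:
        packet, _src = sock.recvfrom(65535)
        gateway = lookup(packet)
        if gateway is None:
            continue  # Discard: no route and no default next hop
        # ToUDPTunnel: re-tunnel the unmodified IP packet.
        sock.sendto(packet, (gateway, TUNNEL_PORT))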
3.2.2. Egress Node
An egress node exchanges packets with other overlay
nodes using UDP tunnels, and also communicates with
servers via NAT. Therefore an egress node looks like a
router node plus some NAT elements. Figure 3 shows
the University of Maryland egress node’s configuration
in our example overlay.
The egress node in our example has routes for all of
the clients connected to ingress nodes. Also, the egress
node’s default route sends the packets onto the local network through NAT rather than discarding them. Two
new elements are added to the configuration to handle
NAT: ToNAT and FromNAT. Element ToNAT rewrites
the source IP address of outgoing packets to the egress
node’s local address. Since the packet received by the
server has the egress node as its source, the server replies
to the egress. Element FromNAT receives packets from
the server and rewrites the destination IP address back
to that of the appropriate client.

Figure 3. Egress Node (FromUDPTunnel and FromNAT feed IPRouteLookup, whose table maps princeton_client via princeton, stanford_client via stanford, and wustl_client via wustl to output 0, with default on output 1; output 0 connects to ToUDPTunnel, output 1 to ToNAT)

Figure 4. Ingress Node (FromUDPTunnel, FromPPTP, and FromNAT feed IPRouteLookup, whose table maps natgeo via umd to output 0, princeton_client to output 1, and default to output 2; output 0 connects to ToUDPTunnel, output 1 to ToPPTP, output 2 to ToNAT)
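The per-flow bookkeeping behind ToNAT and FromNAT can be captured in a few lines. The sketch below is our simplified model (one public address, ports only, no timeouts), not the Click elements themselves.

# Illustrative model of the egress node's NAT state (not the actual
# Click ToNAT/FromNAT elements). Outgoing flows are rewritten to the
# egress's own address; the table lets FromNAT undo the rewrite.

EGRESS_ADDR = "128.8.128.10"  # hypothetical egress address
nat_table = {}                 # egress_port -> (client_addr, client_port)
next_port = 50000

def to_nat(src_addr, src_port, dst_addr, dst_port):
    """ToNAT: rewrite the source to the egress node, remember the flow."""
    global next_port
    egress_port = next_port
    next_port += 1
    nat_table[egress_port] = (src_addr, src_port)
    return (EGRESS_ADDR, egress_port, dst_addr, dst_port)

def from_nat(src_addr, src_port, dst_addr, dst_port):
    """FromNAT: a reply addressed to the egress; restore the client."""
    client_addr, client_port = nat_table[dst_port]
    return (src_addr, src_port, client_addr, client_port)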
3.2.3. Ingress Node
An ingress node may send packets into the overlay, onto
the local network via NAT, and also to clients connected
via PPTP/GRE tunnels. Since egress nodes also handle
UDP overlay tunnels and NAT, an ingress node is essentially an egress node that also handles PPTP. PPTP is
Microsoft’s VPN protocol; we chose PPTP/GRE as the
client connection mechanism because it comes bundled
with Windows, has a free client for Linux, and is handled by most firewalls. This makes it easy for an overlay
that offers a valuable service to attract real users. Figure
4 shows the Princeton node in the example overlay.
On the ingress, the route lookup tunnels packets destined to National Geographic’s server to the UMD node,
while packets for a local client are sent on a PPTP/GRE
connection. The FromPPTP and ToPPTP elements
handle the PPTP connection established by clients. Element FromPPTP receives packets from the GRE tunnels that PPTP sets up between the ingress and clients,
and feeds the packets into the route lookup element. Element ToPPTP demultiplexes the packets it receives to the appropriate GRE tunnel based on the IP destination.
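Conceptually, ToPPTP is a map from each client's private overlay address to its GRE session; the minimal sketch below uses names of our own invention to illustrate that dispatch.

# Conceptual sketch of ToPPTP's demultiplexing step; the class and
# names are ours, not Click internals. Each PPTP session yields a
# GRE tunnel keyed by the client's private overlay address.
import ipaddress

class GreTunnel:
    """Stand-in for one client's PPTP/GRE session."""
    def __init__(self, call_id):
        self.call_id = call_id
    def send(self, ip_packet):
        pass  # would GRE-encapsulate with this session's call ID

gre_tunnels = {}  # client private IP (str) -> GreTunnel

def to_pptp(ip_packet):
    """Deliver a packet down the tunnel matching its destination IP."""
    dst = str(ipaddress.ip_address(ip_packet[16:20]))
    tunnel = gre_tunnels.get(dst)
    if tunnel is not None:
        tunnel.send(ip_packet)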
3.3. Systems View
Finally, to tie together the threads of this discussion,
we describe in detail the path taken by request packets
from the client to the server, and the reply path back to
the client, in the overlay shown in Figure 1.
A client connects to Click running on an ingress node
using PPTP. The PPTP negotiation assigns an IP address
for use by that client within the overlay (selected from
a private address block assigned to that ingress), as well
as a GRE identifier. Once established, the PPTP/GRE
tunnel appears to the client as just another interface; the
client’s routing table can be configured to route arbitrary
traffic into the overlay. We will assume that client C1 has
configured its routing table to forward all its packets into
the overlay via its PPTP tunnel.
Suppose that client C1 sends a packet to National Geographic’s web server. The Click instance
running on the Princeton ingress node receives the
packet from the PPTP/GRE tunnel, strips its GRE
header, and consults its forwarding table (shown in
Figure 4). Since this particular packet is destined to
www.nationalgeographic.com, Click tunnels it
over UDP to a specific port (in our example, port 4700)
on the egress node at University of Maryland. If the
packet had some other destination, Click would use NAT
to send the packet out into the local network.
The Click instance running on the egress node at
UMD is listening on UDP port 4700 for packets arriving
from within the overlay, and C1’s packet arrives there.
Click matches the packet to the default route in its forwarding table (shown in Figure 3). Therefore, Click’s
NAT module rewrites the packet’s source IP address (to
that of the egress node) and source port, and sends it
on the local network. The packet then travels across the
public Internet to National Geographic’s server.
To the server, it appears that the egress node initiated the request, and so the server replies to the egress.
The egress Click instance receives a reply packet from
its NAT module, which rewrites the packet’s destination
address and port to those of the original request (i.e., the
IP address is rewritten to the private address of the client
C1). Click again consults its forwarding table, where
it has a separate entry for each private address block
assigned to an ingress node. Click discovers that the
packet’s destination is a Princeton client, and so it sends
the packet to UDP port 4700 on the Princeton node.
The Princeton ingress node’s Click instance also listens on UDP port 4700 for traffic from within the overlay, and it receives the packet from the egress node. The
forwarding table lookup reveals that the packet is destined to C1; Click sends the reply packet through the
PPTP/GRE tunnel to the client.
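The walkthrough can be summarized as a sequence of source/destination rewrites. The following sketch replays it with invented addresses (a documentation-range address stands in for National Geographic's server); compare each step against the tables in Figures 3 and 4.

# Replay of the walkthrough above with made-up addresses.
CLIENT = ("10.1.0.5", 3333)       # C1's private overlay address (invented)
SERVER = ("203.0.113.80", 80)     # stand-in for the web server
EGRESS = ("128.8.128.10", 50000)  # egress source after ToNAT (invented)

steps = [
    ("C1 -> ingress via PPTP/GRE, tunneled to umd on UDP 4700", CLIENT, SERVER),
    ("egress default route: ToNAT rewrites the source", EGRESS, SERVER),
    ("server reply arrives back at the egress", SERVER, EGRESS),
    ("FromNAT restores C1; princeton_client route tunnels it home", SERVER, CLIENT),
]
for label, src, dst in steps:
    print(f"{label}: src={src} dst={dst}")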
Overlay hops        Latency  Throughput
LAN, no overlay     2.4ms    13.1Mbps
One (I)             3.7ms     4.9Mbps
Two (I + E)         4.5ms     4.5Mbps
Three (I + R + E)   5.5ms     4.5Mbps

Table 1. IIAS overlay performance
4. Evaluation
One objection to user-space routing overlays is that
they can add significant performance penalties to the
end-to-end path between client and server. In order to
attract real users, the overlay must be able to find “better” paths than the default path after factoring in this
overhead. This section characterizes the latency and
throughput of a simple IIAS overlay running on PlanetLab. It also estimates how many IIAS overlays PlanetLab can support simultaneously.
All of the experiments in this section run one- to
three-node overlays on machines at the Princeton PlanetLab site. The client is a laptop running Windows XP
and the server is a local Linux machine. The single-node case consists only of an ingress node between the
client and server; since ingress nodes are NAT proxies
for traffic not directed into the overlay, this case measures the overhead of NAT. Traffic traversing an overlay
will always go through an ingress and an egress node,
and this is the configuration of the two-node case. Finally, the three-node case interposes a router node between the ingress and egress to reflect a more complex
overlay.
We run our experiments on active PlanetLab machines because we are interested in giving a flavor of
the latency that actual users of IIAS can expect to see in
practice. The drawback to this approach is that there are
many different sources of latency for a routing overlay
running on PlanetLab: for instance, the cost of copying
packets up to user space and back to the kernel; time
spent waiting for Click to run on the CPU; and token
bucket traffic shaping on the outgoing link. We do not
try to break down the latency by source, but clearly this
could be done using isolated boxes running the PlanetLab software.
We use netperf [3] to measure the round-trip latency (via UDP) and throughput (via TCP) of a path
that traverses zero to three overlay hops. Table 1 summarizes the results of our experiments. First, the UDP
round-trip latency introduced on the client-server path
by traversing our various overlay configurations is on
the order of a millisecond per node. This seems surprisingly low, considering that the PlanetLab nodes we used
had load averages between 3 and 10 with 100% CPU
utilization during our tests. Since all of the PlanetLab
nodes are connected on the same LAN, we are assuming
(conservatively) that the latency penalty from adding a
node to the overlay path comes entirely from host overhead. Second, the TCP throughput tests show that our
overlay supports about 4.5Mbps throughput between the
client and server. Though this compares favorably with
much of the public Internet, we believe that there is significant room for improvement. Optimizing PlanetLab’s
data plane path—enabling overlays to achieve significantly faster throughput and negligible latency—is an
active area of PlanetLab research.
We expect that multiple groups will be interested in
simultaneously running IIAS overlays on PlanetLab. A
question here is, how many such overlays can PlanetLab support? The two resources of main concern to a
routing overlay are CPU and bandwidth. Running top
on an IIAS overlay node indicates that, on a 3GHz Pentium 4 machine, Click consumes about 1% of CPU for
each 1Mbps of traffic forwarded. Most PlanetLab sites
cap the outgoing bandwidth of their nodes at 10Mbps.
Therefore, contention for bandwidth rather than CPU is
likely to be the issue on these nodes. On the other hand,
on hosts with 100Mbps NICs and no bandwidth limits
(e.g., most I2 nodes), CPU is likely to be the scarce resource. In either case, we estimate that each PlanetLab
node can only support a small number of IIAS overlays
without resource contention becoming a potential issue.
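That estimate can be written out directly. In the sketch below, the 1% CPU per Mbps figure and the 10Mbps and 100Mbps link rates come from the text; the per-overlay traffic level and the CPU share Click can claim on a busy node are assumptions of ours.

# Back-of-the-envelope capacity estimate from the figures in the text:
# Click uses ~1% CPU per 1Mbps forwarded; many sites cap outgoing
# bandwidth at 10Mbps; uncapped (e.g., I2) hosts have 100Mbps NICs.
# The per-overlay traffic and the CPU share on a busy node are our
# assumptions, not measurements from the paper.

CPU_PCT_PER_MBPS = 1.0

def overlays_per_node(per_overlay_mbps, link_cap_mbps, cpu_share_pct):
    by_bandwidth = link_cap_mbps / per_overlay_mbps
    by_cpu = cpu_share_pct / (CPU_PCT_PER_MBPS * per_overlay_mbps)
    return int(min(by_bandwidth, by_cpu))

# Assume each overlay pushes ~4.5Mbps (the measured TCP throughput)
# and Click can claim ~25% of a shared node's CPU:
print(overlays_per_node(4.5, link_cap_mbps=10, cpu_share_pct=25))   # capped site: 2 (bandwidth-bound)
print(overlays_per_node(4.5, link_cap_mbps=100, cpu_share_pct=25))  # I2 host: 5 (CPU-bound)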
We note, however, that this does not mean that PlanetLab itself can only support a small number of IIAS
overlays. Each site has at least two PlanetLab nodes,
and a particular overlay probably does not need to run at
all sites. Therefore overlay operators could partition the
set of PlanetLab sites among themselves to avoid collision. With some coordination between IIAS overlay operators, we envision that PlanetLab can support on the
order of tens of IIAS overlays running simultaneously.
5. Future Directions
One purpose of this paper is to begin assembling
a community of system builders around IIAS, and so
we now describe some of the open problems and challenges ahead.
Currently clients connect to an IIAS overlay using
PPTP, so that fairly naïve users with unmodified Windows systems can easily connect to an overlay. However, PPTP has its drawbacks. First, the control connection always utilizes TCP port 1723, and since only one
slice can bind each port, this means that each PlanetLab
node can currently be an ingress for only one overlay. A
solution would be to run a “PPTP service” on this port.
The client could tell this service which overlay it wanted
to connect to (e.g., by specifying user@overlay as
the user name), and the service could then set up the
GRE tunnel and hand it off to the overlay. Of course,
PPTP is only one method of connecting to an overlay;
others are used in Detour [16] and i3 [10]. We are interested in porting such methods to our IIAS framework.
So far we have discussed a model where client machines outside of the overlay splice into it. However, an
overlay running on PlanetLab also has the opportunity to
attract traffic from other slices running on the same machine. Mechanisms that allow PlanetLab slices to easily
specify overlays running in other slices as carriers for
their packets may open up new areas of research for network service builders.
Finally, the IIAS framework can support prototyping and evaluating radically new network architectures
inside an overlay. Though most Click elements are
Internet-centric, there is nothing preventing one from
writing Click routing elements that use a completely
new addressing scheme. That is, a packet entering the
ingress node would transition to the overlay’s new addressing scheme, be routed through the overlay using
its own internal addressing, and at the egress node the
packet would return to IP routing. Overlays such as IIAS
may provide a viable method of demonstrating and experimenting with the network architectures of the future.
6. Conclusion
The ultimate goal of the Internet In A Slice project
is to enable networking researchers to experiment with
virtual Internets on PlanetLab. The IIAS data plane represents our first step toward that goal. We believe that
this data plane further lowers the barriers to deploying
large-scale overlays on PlanetLab: it can either be used
directly, with each overlay substituting a novel control
plane, or it can serve as a template for a radically different data plane. We encourage overlay researchers to
incorporate the IIAS data plane into their overlays, and
we enlist the support of others who share the IIAS vision
to help us build a thriving development community.
References
[1] Skype FAQ. http://www.skype.com/help/faq/.
[2] The Click Modular Router Project. http://www.pdos.lcs.mit.edu/click/.
[3] The Public Netperf Homepage. http://www.netperf.org/netperf/NetperfPage.html.
[4] D. Andersen, H. Balakrishnan, F. Kaashoek, and
R. Morris. Resilient Overlay Networks. In Proceedings of the 18th ACM Symposium on Operating Systems
Principles (SOSP), pages 131–145, October 2001.
[5] S. A. Baset and H. Schulzrinne. An Analysis of the
Skype Peer-to-Peer Internet Telephony Protocol. Technical Report CUCS–039–04, Department of Computer
Science, Columbia University, December 2004.
[6] A. Bavier, M. Bowman, D. Culler, B. Chun, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink,
and M. Wawrzoniak. Operating System Support for
Planetary-Scale Network Services. In Proceedings of
the First Symposium on Networked System Design and
Implementation (NSDI), Mar. 2004.
[7] M. J. Freedman, E. Freudenthal, and D. Mazières. Democratizing Content Publication with Coral. In Proceedings of the First Symposium on Networked System Design and Implementation (NSDI), March 2004.
[8] K. P. Gummadi, H. V. Madhyastha, S. D. Gribble, H. M.
Levy, and D. Wetherall. Improving the Reliability of
Internet Paths with One-hop Source Routing. In Proceedings of the Sixth USENIX Symposium on Operating
System Design and Implementation (OSDI), December
2004.
[9] M. Handley, O. Hodson, and E. Kohler. XORP: An
Open Platform for Network Research. In Proceedings
of HotNets-I, October 2002.
[10] J. Kannan, A. Kubota, K. Lakshminarayanan, I. Stoica,
and K. Wehrle. Supporting Legacy Applications over
i3. Technical Report UCB/CSD-04-1342, University of
California at Berkeley, May 2004.
[11] B. Karp, S. Ratnasamy, S. Rhea, and S. Shenker.
Spurring Adoption of DHTs with OpenHash, a Public
DHT Service. In Springer-Verlag Lecture Notes in Computer Science Hot Topics Series, February 2004.
[12] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F.
Kaashoek. The Click modular router. ACM Transactions
on Computer Systems, 18(3):263–297, 2000.
[13] A. Nakao, L. Peterson, and A. Bavier. A Routing Underlay for Overlay Networks. In Proceedings of the ACM
SIGCOMM 2003 Conference, August 2003.
[14] L. Peterson, T. Anderson, D. Culler, and T. Roscoe. A
Blueprint for Introducing Disruptive Technology into the
Internet. In Proceedings of the HotNets-I, 2002.
[15] L. Peterson, S. Shenker, and J. Turner. Overcoming the
Internet Impasse through Virtualization. In Proceedings
of HotNets-III, November 2004.
[16] S. Savage, T. Anderson, A. Aggarwal, D. Becker,
N. Cardwell, A. Collins, E. Hoffman, J. Snell, A. Vahdat, G. Voelker, and J. Zahorjan. Detour: A Case for
Informed Internet Routing and Transport. IEEE Micro,
19(1):50–59, January 1999.
[17] L. Wang, V. Pai, and L. Peterson. The Effectiveness
of Request Redirection on CDN Robustness. In Proceedings of the 5th Symposium on Operating System Design and Implementatio (OSDI), Boston, MA, December
2002.
[18] W. Wang and S. Jamin. Statistics and Lessons Learned
from Video Broadcasting ACM SIGCOMM 2004.
http://warriors.eecs.umich.edu/tmesh/sigommstat/.
[19] B. Zhang, S. Jamin, and L. Zhang. Host Multicast: A
Framework for Delivering Multicast to End Users. In
Proceedings of the IEEE INFOCOM 2002 Conference,
June 2002.