Moving beyond TCP/IP
Look at the title and discuss in pairs what you think the text will be about.
Fred Goldstein and John Day for the Pouzin Society 4/2010
The triumph of the TCP/IP protocol suite in today’s market is nearly complete. A
monoculture of networking has emerged, based on Internet protocols originally developed in
the 1970s and popularized in the 1980s. With this level of acceptance and a near-universal use
of IP for purposes well beyond the original designers’ intent, one can imagine that TCP/IP
really represents the final stage in networking, from which all future solutions must slowly
evolve.
This belief, however popular, is not necessarily correct. TCP/IP was a very important stage of
network evolution. It was designed for the ARPANET, which the Department of Defense
built as a resource-sharing network to reduce the cost of its research. It showed promise and
conclusively demonstrated alternative forms of networking that had not previously been
shown at such a scale. But the ARPANET was not set up primarily to do research on network
technology per se; there was no such network. Research on networking was incidental to its
mission.
It was, if anything, too successful. Because the ARPANET and TCP/IP worked reasonably
well, were government-funded, and had no strong competition, they went into production too
early. Their technology was adopted by the later Internet, as well as by many private networks,
and it has simply been accepted as a given.
The Internet itself has been a huge popular success, in large part because of its low-priced
business model. It hasn’t been as successful for its providers; the ISP industry has never been
very profitable, and the early growth of the Internet was largely subsidized by the huge
infusion of overvalued equity during the 1997-2000 stock boom. IP has absorbed the glow
from the Internet’s halo. It is hard to distinguish between the Internet and its protocols. But
they are not the same thing. For the Internet to prosper in the long term, it needs to move
beyond TCP/IP.
The experiment was a success. Now is the time to analyze its lessons and start moving ahead.
Unfortunately, the Internet Engineering Task Force has not been learning these lessons. It has
been acting as a promotional body for TCP/IP itself. It has confused commercial success with
technical excellence and has tragically decided that the next step in IP's evolution is IP Version 6. This
decision, made in the early 1990s before the Internet was even in widespread commercial use,
has distracted the networking community. A different direction is needed.
TCP/IP was designed for a limited set of tasks
When the ARPANET began in 1969, it was designed to demonstrate the then-radical notion
of packet switching. The mere notion that a network could handle small blocks of data, rather
than create constant flows like a phone call, needed to be demonstrated. The original
ARPANET protocol was called NCP. It was, in today’s terms, connection-oriented: Before
data could be passed between two points, a connection had to be set up. What differed from
the connection-oriented telephone network was that the actual network capacity was only
consumed on demand, not merely by having a connection present. But NCP was also a very
rigid protocol, designed to ensure reliability of transmission on a hop-by-hop basis. This may
have been a good idea given the networks of its day. The ARPANET backbone ran at 50
kilobits per second. This was very fast at the time!
The NCP ARPANET is not today’s Internet. It was closer to the public packet-switched
networks developed in the 1970s, based on a protocol called X.25. These networks were
optimized for terminal-to-host applications; X.25 was a dead-end technology that dominated
European markets in the 1980s.
But it was a French researcher, Louis Pouzin, who saw the early ARPANET and had a
different idea about how to perform packet switching. He postulated that the switches in the
middle of the network didn’t have to keep track of connections; they just had to pass packets
as they arrived. Error correction and flow control could be handled at the edges of the
network. He designed and built the first connectionless network, CYCLADES, in 1972. He
also noted in a 1974 article that it was possible for packets to be passed between separate
networks, each with its own administration and policies. This idea was picked up later that
year by ARPANET researchers, along with connectionless networking. We now know it as
the Internet, the network of networks.
ARPANET researchers created a new set of protocols, with TCP for end-to-end error and
flow control and IP for the connectionless middle. They did not receive universal acceptance
even then. When the new protocols were submitted to the International Federation for
Information Processing (IFIP) for international standardization, alternatives closer to Pouzin’s
work were found more suitable, and those alternatives were submitted to ISO for its pending
OSI program. But the Department of
Defense continued with TCP/IP.
It was phased in and, by 1983’s “flag day”, completely replaced NCP. And around that time,
Berkeley released a free, open source implementation of TCP/IP. It included the key
application protocols of its day, too: FTP for file transfer, SMTP for email, and TELNET,
which was largely used for remote terminal login. While designed for Berkeley Unix (BSD),
it was adaptable to other systems too, and helped make TCP/IP popular. It worked, and while
the original code was rather sloppy, the price was right.
TCP/IP’s strength, as with its contemporaneous alternatives, was in dealing with bursty data
traffic. It scaled reasonably well, thanks to Moore’s Law, and with some rather critical
adaptations made in the mid-1980s, adapted to high-speed local area networks. It easily
handled new applications, including file and print services and the World Wide Web.
But it was not designed to replace every other network. Packet switching was designed to
complement, not replace, the telephone network. IP was not optimized to support streaming
media, such as voice, audio broadcasting, and video; it was designed to not be the telephone
network. Packet voice was tested over the ARPANET in the 1970s, so the idea is not new.
However, streaming over IP’s “best effort” model of capacity allocation doesn’t mix well
with data traffic. This was recognized by the late 1980s and streaming packet protocols, with
defined flows, were designed; the most successful was ST-II (see RFC 1190). But these didn’t
catch on. Instead, the rapid growth of capacity in the Internet backbone allowed streams to
just hitch a ride, working just well enough to catch on. It was a case where good enough was
the enemy of the best. The IP juggernaut was unstoppable.
A path forward
So if IP is so imperfect, can anything be done about it? Of course… but it’s not going to be
handled by incremental upgrades or missteps like IPv6. Instead, what John Day has done in
his Patterns in Network Architecture: A Return to Fundamentals is start afresh, taking into
account lessons learned in the 35 years of TCP/IP’s existence, as well as the lessons of OSI’s
failure and the lessons of other network technologies of the past few decades. He has made
some key observations that point to a new direction.
Day has been on the net since 1970, and participated in early work on TCP, TELNET, and
FTP, as well as the IFIP working group referenced earlier. He was, for some years, the
rapporteur of the OSI Reference Model. That is, he chaired the committee that was charged
with developing the famous 7-layer model which had itself been invented in the late 1970s.
Early on, the OSI upper layer group recognized that the model itself was fundamentally
wrong! Layers 5, 6 and 7 were really one layer, and did not work as separate ones. This was
known by 1983 and the protocols were refined to allow them to be handled together. But not
all implementers or textbook authors understood this, and attempts to build OSI-compliant
applications atop separate layer 5 and 6 protocol machines met with so much difficulty that
the whole OSI program collapsed. And indeed one of TCP/IP’s strengths was that it lacked
these superfluous layers. The IP community of the day was, if anything, practical, apparently
more so than it is today.
But fixed protocol stacks themselves turn out to be the problem! The pattern that Day noted is
that protocol functions tend to alternate as one goes up the stack. This alternation reflects a
repeating unit consistent with interprocess communication. The error and flow control
protocol breaks up into two functions: data transfer, which sends data forward, and data
transfer control, which provides feedback from the receiver. Both functions appear in X.25’s
layer 2 and layer 3 protocols, and the same split describes the relationship between IP (data
transfer) and TCP (data transfer control). Applications themselves have a common pattern
too.
Lower layers thus tend to do the same things as upper layers, but on a more local scale, with
more flows aggregated into them. At the application layer, a single instance frequently
communicates between two computers. At the other extreme, a large network backbone link
may carry traffic from thousands of processes, headed between many different computers, for
part of the way.
Recursive layers
This leads to the first principle of our proposed new network architecture: Layers are
recursive. The same protocol can be used repeatedly in a protocol stack, encapsulating each
layer in another instance of itself. There is thus no need for purpose-built protocols for each
layer. There is also not a fixed number of layers in the stack. The number of layers between
the application and the physical medium is variable; at any given point, there are simply as
many as needed, no more, no less. But any given implementation only needs to deal with
itself, the layer above it, and the layer below it. The actual depth of the stack is essentially
invisible.
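To make the recursion concrete, here is a minimal sketch in Python; every name in it (PDU,
Layer, the layer labels) is invented for illustration and is not taken from the RINA
specification. The same class is instantiated at every level, and each instance simply wraps
what the layer above hands it before passing the result to the layer below.

from dataclasses import dataclass

@dataclass
class PDU:
    # A protocol data unit: a few header fields plus an opaque payload.
    src: str
    dst: str
    payload: bytes

class Layer:
    # One recursive layer. It deals only with itself, the layer above it,
    # and the layer below it; the same class is reused at every level.
    def __init__(self, name, below=None):
        self.name = name
        self.below = below   # next layer down, or None at the physical medium

    def send(self, src, dst, payload):
        pdu = PDU(src, dst, payload)
        wire = ("%s|%s|%s|" % (self.name, pdu.src, pdu.dst)).encode() + pdu.payload
        # Recurse: this layer's PDU becomes the payload of the layer below.
        return self.below.send(src, dst, wire) if self.below else wire

# The stack depth is not fixed; build as many layers as the path needs.
stack = Layer("app-layer", Layer("metro-layer", Layer("backbone-layer")))
print(stack.send("A", "B", b"hello"))

Because each instance talks only to its neighbours, the total depth of the stack stays invisible
to any single layer, which is exactly the point made above.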
Layers are not the same thing as protocols; more than one protocol can make up a layer.
Because the same group of protocols is used repeatedly, the implementation is simpler than
the TCP/IP stack. There’s no need for separate protocols for “layer 2”, “layer 3”, etc. Because
the layers recurse, and can scale to form a large internet, the protocol suite that supports the
concept from Patterns in Network Architecture (PNA) is called the Recursive Internetwork
Architecture (RINA).
The protocols that make up the basic RINA layer include the Data Transfer Protocol (DTP),
which relays the payload in the forward direction. Its contents include addressing
information and protection information (a checksum and a time-to-live counter that detects
routing loops).
The Data Transfer Control Protocol (DTCP) performs the error and flow control functions,
sending feedback from destination to source. Note that DTP has the payload, which is visible
outside of the layer, while DTCP operates entirely within the black box. A layer also has a
management protocol.
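As a rough illustration of that split, the sketch below (in Python, with field names invented
for this example rather than taken from the RINA documents) gives DTP the forward payload
plus its protection fields, and gives DTCP only the feedback that travels the other way.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DTPMessage:
    # Forward direction: the only PDU whose payload is visible outside the layer.
    src_addr: int
    dst_addr: int
    ttl: int         # decremented at each relay; reaching zero signals a routing loop
    checksum: int    # protection against corruption in transit
    payload: bytes

@dataclass
class DTCPMessage:
    # Reverse direction: feedback that stays entirely inside the layer.
    ack_seq: int     # highest in-order sequence number received
    credit: int      # how many more PDUs the receiver will accept (flow control)

def relay(msg: DTPMessage) -> Optional[DTPMessage]:
    # What a relay in the middle might do: forward DTP PDUs, dropping any
    # whose time-to-live has expired (a packet caught in a loop).
    if msg.ttl <= 0:
        return None
    return DTPMessage(msg.src_addr, msg.dst_addr, msg.ttl - 1,
                      msg.checksum, msg.payload)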
Another observation that Day made is that networking is just interprocess communications
(IPC), a standard function on modern computer operating systems. IPC within a single
computer is quite simple; it could just take the form of memory shared between two
processes. IPC between computers requires additional mechanisms to deal with issues such as
reliability of communications and synchronization. That’s where network protocols come in.
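A toy contrast, not drawn from the text: within one machine a shared queue delivers
everything intact and in order, while across a network the sender has to add sequence
numbers and retransmission because the medium can drop packets. The channel below simply
simulates that loss.

import random

# Local IPC: within one machine the operating system delivers data intact
# and in order, so no extra protocol machinery is needed.
local_queue = []
local_queue.append(b"hello")
assert local_queue.pop(0) == b"hello"

# Between machines the medium may lose packets, so the sender must number
# its data and retransmit until each piece is acknowledged.
def unreliable_deliver(packet, inbox):
    if random.random() > 0.3:      # pretend 30% of packets are lost
        inbox.append(packet)

def send_reliably(chunks):
    inbox, received = [], []
    for seq, chunk in enumerate(chunks):
        while not inbox:           # keep retransmitting until it arrives ("ack")
            unreliable_deliver((seq, chunk), inbox)
        received.append(inbox.pop()[1])
    return received

print(send_reliably([b"a", b"b", b"c"]))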
Are the following statements true, false or not given according to the text?
1. TCP/IP is the last word in networking. Any subsequent networking must be based on
it.
2. TCP/IP was put in place to help us learn about how networks work.
3. The ISP industry is making less money every year.
4. The NCP is not the same as a telephone network because its capacity is only consumed
when it is actually used, whereas in the latter case the capacity is reserved for the whole
connection and is thus always in use.
5. Although the Department of Defense used TCP/IP, it was not widely accepted
elsewhere.
6. Packet switching was made to work alongside the phone network. It was not made as
an alternative to it.
7. The function of a protocol remains the same through the earlier layers, then switches as
it nears the top.
8. RINA is not as straightforward a system as TCP/IP because it uses the same protocols
more than once in different layers.
Find words in the text that mean
1. In itself
2. A ring of light around an object
3. Make money and succeed
4. Fixed – the opposite of flexible
5. To suggest a theory
6. Badly thought-out or lazily designed
7. An unstoppable force
8. The part of the network that all the other sections are connected to
9. To come round again repeatedly