Friday, April 22, 2011

Internet Technology

Definition of the Internet

On 24 October 1995, the Federal Networking Council, a group of representatives from the U.S. government agencies that support the science and technology use of the Internet by U.S. researchers, defined the Internet as follows:

"Internet" refers to the global information system that:
  • Is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons;
  • Is able to support communications using the Transmission Control Protocol/ Internet Protocol (TCP/IP) Suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols;
  • Provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
The FNC thus defines the Internet not so much as a physical network, but rather as any subnetwork that executes the Internet protocol suite and related services. One of the great strengths of the Internet is that it spans highly diverse physical link technologies: twisted pair, coax, optical fiber, microwave, radio frequency, satellite, and so on.

Strengths and Weaknesses of Internet Technology

Strengths of Internet Technology

A key underlying assumption of the Internet is that end nodes are intelligent and have the ability to execute the TCP/IP protocol stack. Until recently, this implied a fairly complex (and expensive) end device, such as a laptop computer. Today’s personal digital assistants, costing a few hundred dollars, are now sufficiently powerful to execute these protocols. While existing telephone handsets are considerably dumber (and less expensive) than this, cellular telephones do possess embedded microprocessors and are capable of running more sophisticated software than the telephone network currently exploits.

Several of the key design principles that underlie Internet technology come from its origin as a proposed architecture for a survivable command and control system in the event of a nuclear war. The Internet achieves its robust communications through packet switching and store-and-forward routing. It is not necessary to create a circuit between end points before communications can commence. Information is sent in small units, called packets, which may be routed differently from each other, and which may arrive at their destination out of order. This is sometimes called a datagram service (TCP, a transport layer protocol, ensures that data arrives without errors and in the proper sequence). The routing infrastructure gracefully adapts to the addition or loss of nodes. But the network does assume that all routers will cooperate with each other to ensure that the packets eventually reach their destination.
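
As a rough illustration of the datagram service just described, the Python sketch below gives each "packet" an explicit sequence number, delivers the packets in shuffled order, and re-sequences them at the receiving end. The packet format and reassembly loop are invented for illustration; they stand in for what TCP does with its own sequence numbers, not for any real implementation.

    import random

    # A "packet" is just (sequence_number, payload). In the real Internet the
    # sequence number lives in the TCP header; here it is illustrative only.
    message = "packets may be routed differently and arrive out of order"
    packets = [(seq, ch) for seq, ch in enumerate(message)]

    # The network delivers each datagram independently: no circuit, no ordering.
    random.shuffle(packets)          # simulate different routes / variable delay

    # A TCP-like receiver buffers out-of-order arrivals and releases data in order.
    expected, buffer, delivered = 0, {}, []
    for seq, payload in packets:
        buffer[seq] = payload
        while expected in buffer:            # deliver any contiguous prefix
            delivered.append(buffer.pop(expected))
            expected += 1

    assert "".join(delivered) == message     # application sees an ordered stream
    print("".join(delivered))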

The first design principle is that there is no state in the network; in particular, there is no connection state in the switches. This has the positive effect of making the network highly robust. A switch can fail, and since it contained no critical state, the network can adapt to the loss by rerouting the packet stream around the lost switch.

A variation on this is called the end-to-end principle. It is better to control the connection between two points in the network from the ends rather than build in support on a hop-by-hop basis. The standard example of the end-to-end principle is error control, which can be done on a link-by-link basis, but must still be done on an end-to-end basis to ensure proper functioning of the connection.
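
A minimal sketch of that error-control example, assuming a hypothetical relay() hop that corrupts data after any link-level check has already passed: only a check computed by the sender and verified by the receiver catches the problem.

    import hashlib

    def checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def relay(data: bytes) -> bytes:
        # A well-behaved hop could verify a link-level check here, but suppose
        # the relay itself corrupts the data after that check has passed.
        return data.replace(b"end-to-end", b"hop-by-hop")

    original = b"error control must be done end-to-end"
    sent_digest = checksum(original)          # computed by the sending endpoint

    received = relay(original)                # traverses an untrustworthy hop
    if checksum(received) != sent_digest:     # verified only at the far endpoint
        print("end-to-end check failed; request retransmission")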

A second principle is the Internet’s highly decentralized control. Routing information is distributed among the nodes, with no centralized node controlling critical functions such as routing. This helps enhance the network’s resilience to failure.

One of the great successes of the Internet is its ability to operate over an extremely heterogeneous collection of access technologies, despite large variations in bandwidth, latency, and error behavior. The key advantage this provides is that the network needs to make few assumptions about the underlying link technologies.

Weaknesses of Internet Technology

The Internet also has some serious weaknesses. First, it provides no differential service. All packets are treated the same. Should the network become congested, arbitrary packets will be lost. There is no easy way to distinguish between important traffic and less important traffic, or real-time traffic that must get through versus best-effort traffic that can try again later. Note that the existing protocols do have the ability to support priority packets, but these require cooperation to ensure that the priority bit is set only for the appropriate traffic flows.
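
For concreteness, the sketch below shows how an end host can mark its own packets with an IP type-of-service value through the standard socket API. The specific value (the classic low-delay bit) and the destination address are illustrative, platform support varies, and, as noted above, routers are free to ignore the marking entirely.

    import socket

    # Mark a UDP socket's packets with a low-delay type-of-service value.
    # 0x10 is the classic IPTOS_LOWDELAY bit; modern networks read this byte
    # as a DSCP code point, and nothing obliges routers to honor it.
    tos_option = getattr(socket, "IP_TOS", 1)   # 1 is the usual IP_TOS value on Linux

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.setsockopt(socket.IPPROTO_IP, tos_option, 0x10)
    except OSError:
        print("setting IP_TOS is not supported on this platform")

    sock.sendto(b"marked as priority traffic", ("127.0.0.1", 9999))
    sock.close()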

Second, there are no control mechanisms for managing bottleneck links. This is related to the first weakness, in that the Internet has no obvious strategy for scheduling packets across the bottleneck. Recent work by Floyd and Jacobson has developed a scheme called class-based queuing, which does provide a mechanism for dealing with this problem across bottlenecks like the "long, thin pipe" of the Internet between North America and Europe.
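
The following toy scheduler conveys the flavor of class-based queuing: traffic classes share a bottleneck link in proportion to assigned weights. The class names, weights, and packet labels are made up, and the sketch omits the borrowing and idle-time machinery of the real CBQ algorithm.

    from collections import deque

    # Toy class-based scheduler for a bottleneck link: each traffic class gets
    # a share of the link proportional to its weight.
    classes = {
        "interactive": {"weight": 3, "queue": deque()},
        "bulk":        {"weight": 1, "queue": deque()},
    }

    for i in range(6):
        classes["interactive"]["queue"].append(f"audio-{i}")
        classes["bulk"]["queue"].append(f"ftp-{i}")

    def drain_bottleneck(classes, link_slots):
        """Serve queued packets in proportion to class weights."""
        sent = []
        while link_slots > 0 and any(c["queue"] for c in classes.values()):
            for name, cls in classes.items():
                for _ in range(cls["weight"]):
                    if link_slots == 0 or not cls["queue"]:
                        break
                    sent.append((name, cls["queue"].popleft()))
                    link_slots -= 1
        return sent

    print(drain_bottleneck(classes, link_slots=8))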

The third weakness lies in one of the Internet’s strengths: store-and-forward routing. The queuing nature of store-and-forward networks introduces variable delay in end-to-end performance, making it difficult to guarantee or even predict performance. While this is not a problem for best-effort traffic, it does pose challenges for supporting real-time connections.
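
One common receiver-side response to this variable delay is a playout buffer sized from recent delay estimates, as in the sketch below. The delay samples and smoothing constants are hypothetical; the point is only that the receiver must absorb jitter that the network itself does not bound.

    # Variable store-and-forward queuing delay ("jitter") is what makes
    # real-time traffic hard. A playout buffer holds each packet until a
    # deadline chosen from recent delay estimates.
    network_delays_ms = [40, 42, 95, 41, 44, 120, 43, 40]   # hypothetical samples

    est_delay, est_var = network_delays_ms[0], 0.0
    for d in network_delays_ms:
        est_delay = 0.875 * est_delay + 0.125 * d           # smoothed delay estimate
        est_var   = 0.75 * est_var + 0.25 * abs(d - est_delay)

    playout_delay = est_delay + 4 * est_var                 # safety margin
    print(f"hold packets for about {playout_delay:.0f} ms before playing them")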

A fourth weakness arises from another one of the Internet’s strengths: decentralized control. In this case, it is very difficult to introduce new protocols or functions into the network, since it is difficult to upgrade all end nodes and switches. It will be interesting to see how rapidly IPv6, the next generation of the Internet Protocol, is disseminated through the Internet. A partial solution is to retain backward compatibility with existing protocols, though this weighs down the new approaches with the old. A way to circumvent this limitation is to provide new services at the application level, not inside the network. We will discuss the client-proxy-server architecture in the section "Next Generation Internet."

The last in our list of weaknesses comes from the Internet’s assumption of a cooperative routing infrastructure. Without a truly trusted infrastructure, the Internet as it now exists suffers from well-known security problems. Several solutions are on the horizon. One is end-to-end encryption, which protects the information content of packets from interception. A second is policy-based routing, which limits packet flows to particularly trusted subnetworks. For example, using policy-based routing, it is possible to keep a packetized audio stream within the subnetworks of well-known ISPs and network providers, as opposed to arbitrary subnetworks.

Next Generation Internet

An alternative effort, based on Internet technology, aims to build an "integrated services" network, that is, a network equally good at carrying delay-sensitive transmissions as delay-insensitive ones. This effort has been called the Integrated Services Packet Network (ISPN).

The next generation Internet has several attractive features. It will have ubiquitous support for multipoint-to-multipoint communications based on multicast protocols. Recall that multiparty calls were the additional functionality most requested by cellular telephone subscribers. The next generation of the Internet protocols, IPv6, has built-in support for mobility and mobile route optimization.

To achieve good real-time performance, the Internet engineering community has developed a signaling protocol called RSVP (Resource ReSerVation Protocol). This enables a weaker notion of connections and performance guarantees than the more rigid approach found in ATM networks. Performance is a promise rather than a guarantee, so it is still necessary to construct applications to adapt to changes in network performance. RSVP is grafted onto the Internet multicast protocols, which already require participants to explicitly join sessions. Since receivers initiate the signaling, this has nice scaling properties.

Another attractive aspect of the ISPN approach is its reliance on soft state in the network. Performance promises and application expectations are continuously exchanged and refreshed. Soft state makes the network very resilient to failure. The protocols have the ability to work around link and switch failures.
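
A minimal sketch of the soft-state idea, with invented names and timer values: reservations installed by receiver-initiated refreshes simply evaporate when the refreshes stop, so no explicit teardown or failure-recovery signaling is needed.

    # Soft state in miniature: a router keeps reservations only as long as the
    # receivers keep refreshing them. Real RSVP messages and timers are
    # considerably richer than this.
    REFRESH_TIMEOUT = 30.0          # seconds a reservation lives without a refresh

    reservations = {}               # (flow, receiver) -> (expiry time, bandwidth)

    def refresh(flow, receiver, now, bandwidth_kbps):
        """Receiver-initiated reservation: installing and refreshing look the same."""
        reservations[(flow, receiver)] = (now + REFRESH_TIMEOUT, bandwidth_kbps)

    def expire(now):
        """Anything not refreshed simply evaporates -- no explicit teardown needed."""
        for key in [k for k, (expiry, _) in reservations.items() if expiry <= now]:
            del reservations[key]

    refresh("audio-session", "receiver-A", now=0.0, bandwidth_kbps=17)
    refresh("audio-session", "receiver-B", now=0.0, bandwidth_kbps=17)
    expire(now=10.0)                # both still alive
    refresh("audio-session", "receiver-A", now=10.0, bandwidth_kbps=17)
    expire(now=35.0)                # receiver-B stopped refreshing, so it is gone
    print(sorted(reservations))     # [('audio-session', 'receiver-A')]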

ISPN requires that a sophisticated network stack execute in the end node. Add to this the requirement to handle real-time data streams, primarily audio but also video. Fortunately, Moore’s Law tells us that this year’s microprocessor will be twice as fast in 18 months (or offer the same performance at half the price). Existing microprocessors have become powerful enough to perform real-time encoding and decoding of audio, and are achieving ever higher throughput rates for video.

Recall that in the traditional telephony infrastructure, hardware operates at 64 kbps for PCM encoding. The Internet’s Multicast Backbone (MBone) audio software supports encoding and decoding at many rates: 36 kbps Adaptive Differential Pulse Code Modulation (ADPCM), 17 kbps GSM (the full-rate coder used in the GSM cellular system), and 9 kbps Linear Predictive Coding (LPC). It is possible to achieve adequate video at rates from 28.8 kbps to 128 kbps using scalable codecs and layered video techniques.
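
The arithmetic behind these figures is simple; the short sketch below derives the 64 kbps PCM rate from 8 kHz sampling at 8 bits per sample and shows what the lower MBone rates imply per packet, assuming an illustrative 20 ms packetization interval.

    # Where the 64 kbps of telephone PCM comes from, and what the lower-rate
    # MBone codecs imply per packet. The 20 ms interval is only an example.
    pcm_kbps = 8000 * 8 / 1000              # 8 kHz sampling x 8 bits per sample
    print(f"PCM: {pcm_kbps:.0f} kbps")      # -> 64 kbps

    packet_interval_s = 0.020
    for name, kbps in [("ADPCM", 36), ("GSM", 17), ("LPC", 9)]:
        payload_bytes = kbps * 1000 * packet_interval_s / 8
        print(f"{name}: {kbps} kbps ~ {payload_bytes:.0f} bytes per 20 ms packet")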

A key element of the ISPN approach to providing reasonable performance is the Real-time Transport Protocol (RTP), built on top of the underlying routing protocols. RTP supports application-level framing, which places the responsibility on applications to adapt to the performance capabilities of the network. The protocol includes a control component that reports the bandwidth actually received at the receiver back to the sender. Thus, if the network cannot support the sender’s transmission rate, the sender reacts by sending at a lower rate. The protocol can also gently probe the network at regular intervals in order to increase the sending rate when the network can support it.
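
The sketch below captures the shape of this adaptation loop, with invented thresholds and step sizes rather than anything taken from the RTP specification: the sender backs off when the receiver reports loss and probes gently upward otherwise.

    # Toy sender-side adaptation driven by receiver reports.
    def adapt_rate(current_kbps, loss_fraction,
                   min_kbps=9, max_kbps=128, backoff=0.75, probe_kbps=4):
        if loss_fraction > 0.05:                     # receiver reports congestion
            return max(min_kbps, current_kbps * backoff)
        return min(max_kbps, current_kbps + probe_kbps)   # cautiously probe upward

    rate = 64.0
    for report in [0.0, 0.12, 0.20, 0.01, 0.0, 0.0]:     # hypothetical loss reports
        rate = adapt_rate(rate, report)
        print(f"loss {report:4.0%} -> send at {rate:5.1f} kbps")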

This approach stands in stark contrast to the ATM model. The latter has a more static view of performance in terms of guarantees. Guarantees simplify the applications since they need not be written to be adaptive. But it places the onus on the network to police the behavior of the traffic so that the guarantees can be achieved. This puts more state into the network, which now requires set-up before use and which makes the network sensitive to failures.

A second critical advantage of the ISPN architecture is the ease with which new services can be introduced into the network, using the so-called "proxy architecture." Proxies are software intermediaries that provide useful services on behalf of clients while shielding servers from client heterogeneity. For example, consider a multipoint videoconference in which all but one of the participants are connected by high-speed links. Rather than defaulting to the lowest-common-denominator resolution and frame rate of the poorly connected node, a proxy can customize the stream for the available bandwidth of the bottleneck link.
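
A sketch of the idea, with a hypothetical quality ladder and bandwidth figures: the proxy picks, per receiver, the richest encoding that the receiver's bottleneck link can carry, so well-connected participants keep their full quality.

    # The sender produces one high-quality stream; a proxy near the bottleneck
    # tailors it per receiver instead of dragging everyone down.
    QUALITY_LADDER = [              # (minimum kbps, resolution, frames per second)
        (384, "640x480", 30),
        (128, "320x240", 15),
        (28,  "160x120", 5),
    ]

    def proxy_select(receiver_kbps):
        """Pick the richest encoding the receiver's bottleneck link can carry."""
        for min_kbps, resolution, fps in QUALITY_LADDER:
            if receiver_kbps >= min_kbps:
                return resolution, fps
        return "audio only", 0

    for name, kbps in {"office LAN": 10000, "ISDN": 128, "modem": 28.8}.items():
        print(name, "->", proxy_select(kbps))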

It should be noted that there has recently been some controversy within the Internet engineering community about whether there is a need for RSVP-based reservations and control signaling at all. One way to look at the Internet’s solution for improving performance is simply to add more bandwidth to the network, either by using faster link technology or by increasing the number of switches in the network. (I have already argued that the telephone network wastes bandwidth in its 64 kbps voice encoding, though it carefully manages the total available bandwidth through call admission procedures.) This is not such a farfetched idea. With the arrival of widespread fiber optic infrastructure, the amount of worldwide bandwidth has increased enormously. This is one of the reasons that long distance phone service has rapidly become a commodity business (and this is why AT&T has such a low price-to-earnings ratio on the stock exchange!). For example, the amount of transpacific and transatlantic bandwidth is expected to increase by a factor of five between 1996 and 2000 as new fiber optic cables come on line. Nevertheless, local access bandwidth is quite limited, and we expect this to continue for some time to come despite the rollout of new technologies like ADSL.

Randy H. Katz
EECS Department
University of California, Berkeley
Berkeley, CA 94720-1776



