Monday, September 1, 2008

End-to-End Arguments in System Design

Seen today as commonplace in network engineering, features and functionality are consistently placed at the end-hosts instead of being implemented in the network architecture itself. Although this seems appropriate at first glance, justifying the end-to-end argument is non-trivial and requires careful consideration. The argument itself is stated briefly as follows: any feature between end-hosts must always depend, at least in part, on the end-hosts themselves. Thus, the communication system cannot completely implement any end-host feature. To justify this argument the paper delves into a classic example: implementing reliable communication, either in the network architecture itself or in the end-hosts. If reliability were a network feature, the end-hosts must still do their own error checking, since the file system, file transfer program, or buffering system may fail. Thus, the burden on the end-hosts is not lifted. It is noted, however, that an appropriately low network failure rate is still necessary to ensure that an exponential number of retried packets doesn't flood the network. The authors, in fact, note that it's probably sufficient for the network error rate to be lower than the error rate at the application level.
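The reliable file transfer example can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the `flaky_transfer` function stands in for everything below the application (network, buffers, disk), and the end-to-end check is an application-level checksum with retry.

```python
import hashlib

def checksum(data: bytes) -> str:
    # Application-level checksum over the whole file: this is the
    # end-to-end check that no network-level guarantee can replace.
    return hashlib.sha256(data).hexdigest()

def flaky_transfer(data: bytes, attempt: int) -> bytes:
    # Stand-in for the network plus storage path below the application.
    # The first attempt is corrupted to model a failure the network's
    # own reliability mechanisms would never see (e.g., a bad buffer).
    return data[:-1] + b"\x00" if attempt == 0 else data

def send_file(data: bytes, transfer, max_retries: int = 3) -> bytes:
    # The end-hosts compare checksums and retry the whole transfer.
    # The network's only obligation is to keep failures rare enough
    # that retries don't flood it.
    expected = checksum(data)
    for attempt in range(max_retries):
        received = transfer(data, attempt)
        if checksum(received) == expected:
            return received
    raise IOError("transfer failed after retries")

result = send_file(b"hello, end-to-end world", flaky_transfer)
```

Note that even a perfectly reliable network would not let us delete `checksum`: the corruption modeled here happens after delivery, which is the heart of the argument.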

Another reason for not implementing reliability at the network level is simply that some applications do not need it, or even want it. By placing the burden of implementation on the higher levels, only the applications that need a feature have to pay for it. This reasoning applies not only to reliable communication but also to data acknowledgments. For some applications these are completely unimportant (e.g., voice communication). Additionally, simple acknowledgments may be useless to the end-hosts, which are really interested in knowing whether or not the receiver acted appropriately on the data; in that case, the receiver would be sending its own application-level acknowledgments anyway. The paper also delves into encryption, observing that if it were a network feature, authenticity must still be checked by the application, and the network must then be completely trusted by those who use it. By implementing encryption at the end-hosts, they do nearly the same work and don't have to trust the routers. The paper does, interestingly, note that one good reason for network-level encryption may be to prevent end-hosts from freely transmitting unencrypted information that they shouldn't. The paper goes on to apply similar arguments to duplicate message suppression, FIFO message delivery, and transaction management.
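The encryption point can be made concrete with a toy sketch. Everything here is illustrative and not from the paper: the XOR "cipher" and the shared `KEY` are stand-ins for a real scheme. The end-hosts seal and open messages themselves, and the authenticity check at the receiving application is exactly the step that cannot be delegated to the network.

```python
import hashlib
import hmac

KEY = b"shared-secret"  # hypothetical pre-shared key between end-hosts

def _keystream(length: int) -> bytes:
    # Toy keystream derived from the key; a real cipher would be used
    # in practice, but the structure of the argument is the same.
    digest = hashlib.sha256(KEY).digest()
    return bytes(digest[i % len(digest)] for i in range(length))

def seal(plaintext: bytes):
    # Sender-side end-host: encrypt, then attach an authenticity tag.
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, _keystream(len(plaintext))))
    tag = hmac.new(KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def open_sealed(ciphertext: bytes, tag: bytes) -> bytes:
    # Receiver-side end-host: the application itself checks authenticity,
    # so no router along the path ever needs to be trusted.
    expected = hmac.new(KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message altered in transit")
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(len(ciphertext))))
```

If a router tampered with the ciphertext, `open_sealed` would reject it; the same check would still be needed even if the network also encrypted the traffic, which is why pushing encryption into the network saves the end-hosts almost nothing.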

Some good background reading would be "The Design Philosophy of the DARPA Internet Protocols", which gives a more anecdotal analysis of why the network architecture turned out to be so basic, in contrast to this paper's style of formal reasoning. For instance, part of the reason for leaving greater functionality to the end-hosts was simply that it was difficult to achieve consistency and correctness across a group of interconnected networks if they were all forced to implement anything more than the very basics. Some interesting discussion topics might include what problems we've run into as a result of pushing features more and more onto network programmers. How difficult has it been to develop features on the end-hosts (such as reliable file transfer), and what mistakes have we made along the way?

The paper's main focus was to give straightforward and logical arguments to support the end-to-end argument, which I believe it did quite successfully and very clearly. However, in the design of something as complex as the Internet, it might be more convincing to give examples of systems that were feature-rich in the network architecture and experienced problems supporting a variety of communications. Of course, such an example may not exist, but if it did, it would lend incredible support to the authors' arguments.

The paper should be kept in the syllabus because its reasoning is clear, straightforward, and most importantly, convincing. Although what the paper says is repeated in other papers, it may be useful to have it as a condensed set of arguments for the end-to-end argument.

Because of the paper's clear reasoning, few issues are left to future research, aside from possibly throwing the end-to-end argument out the window and seeing how far you can get in "beating" the Internet in terms of the seven goals listed in the previous paper discussed.

The implications of the work are that we can rest easy at night, knowing that in the last 20 years we've made the right engineering decisions when we designed the Internet.
