Tuesday, September 15, 2009

Random Early Detection Gateways for Congestion Avoidance


S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397-413, August 1993.

One line summary: This paper presents a congestion avoidance algorithm called Random Early Detection (RED) that drops packets probabilistically whenever the computed average queue length falls between a minimum and a maximum threshold.

Summary

This paper presents an algorithm for congestion avoidance called Random Early Detection (RED). The authors argue that congestion can be detected most effectively at gateways rather than at the endpoints. In the current Internet, TCP detects congestion only after a packet has been dropped, which with Drop Tail gateways means the queue is already full. Large queues at gateways are especially undesirable on connections with large delay-bandwidth products, since they significantly increase the average delay in the network. The authors also assume that it is useful for gateways to multiplex traffic together in FIFO-scheduled queues, and that per-connection gateway mechanisms such as those used in Fair Queueing should be reserved for situations where gateways without per-connection state are clearly inadequate. Since RED keeps no per-connection state, it is intended for networks where transport protocols cooperate by responding to congestion indications, although it can still exert some control over congestion when sources are not cooperative. Beyond keeping the average queue size low and detecting congestion early, RED places particular emphasis on avoiding global synchronization of TCP connections and on avoiding bias against bursty traffic.

The RED algorithm first computes the average queue size using a low-pass filter, an exponentially weighted moving average of the instantaneous queue length. RED defines two thresholds on this average, a minimum and a maximum. When the average queue size is below the minimum threshold, no packets are dropped. When it is above the maximum threshold, every arriving packet is dropped. When it falls between the two thresholds, each arriving packet is dropped with a probability that is a function of the average queue size; because the drops are randomized, the fraction of any one connection's packets that are dropped is roughly proportional to that connection's share of the bandwidth through the gateway.

Later in the paper, the authors discuss how to set the algorithm's parameters. The weight used in computing the average should be neither so low that the average responds too slowly to changes in the actual queue size nor so high that it fluctuates wildly in response to bursty traffic. The minimum threshold should not be set too low if traffic is fairly bursty, and the maximum threshold depends on how high an average queue length is tolerable. As a rule of thumb, the difference between the two thresholds should be larger than the typical increase in queue length over one round-trip time, in order to avoid global synchronization. A Python sketch of the per-packet drop decision, parameterized this way, follows.
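To make the mechanics concrete, here is a minimal Python sketch of RED's per-packet drop decision as the paper describes it. The default parameter values are illustrative placeholders rather than the paper's recommended settings, and some details, such as the adjustment of the average while the queue is idle, are omitted.

import random

class REDQueue:
    """Minimal sketch of RED's per-packet drop decision. Parameter names
    follow the paper (wq, min_th, max_th, max_p); the default values here
    are illustrative placeholders, not recommendations."""

    def __init__(self, wq=0.002, min_th=5, max_th=15, max_p=0.02):
        self.wq = wq          # EWMA weight for the average queue size
        self.min_th = min_th  # minimum threshold (in packets)
        self.max_th = max_th  # maximum threshold (in packets)
        self.max_p = max_p    # drop probability as the average nears max_th
        self.avg = 0.0        # low-pass-filtered average queue size
        self.count = 0        # packets accepted since the last drop

    def should_drop(self, queue_len):
        """Called on each packet arrival; returns True to drop the packet."""
        # Low-pass filter: avg <- (1 - wq) * avg + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len

        if self.avg < self.min_th:
            self.count = 0
            return False      # below the minimum threshold: never drop
        if self.avg >= self.max_th:
            self.count = 0
            return True       # above the maximum threshold: always drop

        # Between the thresholds, the base probability pb grows linearly
        # with the average queue size ...
        pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ... and is corrected by count so that successive drops are spread
        # out roughly evenly instead of clustering (the paper's pa).
        pa = pb / max(1.0 - self.count * pb, 1e-9)
        self.count += 1
        if random.random() < pa:
            self.count = 0
            return True
        return False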

The authors state that RED meets the following goals: congestion avoidance, congestion detection on the same time scale at which connections can respond, no global synchronization, simplicity, maximizing the throughput-to-delay ratio, no bias against bursty traffic (which the authors somewhat misleadingly term "fairness"), and appropriateness for a wide range of network environments. To show this, they conduct a number of simulation experiments comparing RED with Drop Tail and Random Drop gateways. They also show how RED gateways can be used to identify connections using more than their fair share of bandwidth by noting which connections have more of their packets dropped, since the probability of a connection's packets being dropped is assumed to be roughly equal to that connection's fraction of the link bandwidth through the gateway (a toy sketch of this idea appears after this paragraph). Lastly, they provide an efficient implementation of the RED algorithm.
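Because randomized drops fall on each connection roughly in proportion to its share of the arriving traffic, a gateway could in principle flag heavy connections simply by tallying drops per flow. Here is a toy Python sketch of that bookkeeping; the flow key and the cutoff factor are hypothetical details of my own, not mechanisms from the paper.

from collections import Counter

class DropMonitor:
    """Tally RED drops per connection. Under RED's randomized dropping, a
    flow with a disproportionate share of the drops is, probabilistically,
    a flow with a disproportionate share of the bandwidth."""

    def __init__(self):
        self.drops = Counter()   # drops observed per flow key
        self.total = 0

    def record_drop(self, flow_key):
        # flow_key might be a (src, dst, ports) tuple; a hypothetical choice
        self.drops[flow_key] += 1
        self.total += 1

    def heavy_flows(self, n_flows, factor=2.0):
        """Flows whose share of drops exceeds `factor` times a fair share."""
        if self.total == 0:
            return []
        fair_share = 1.0 / n_flows
        return [flow for flow, c in self.drops.items()
                if c / self.total > factor * fair_share]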

Critique

I didn’t really like this paper. I found it interesting that they argued for FIFO queueing, and their arguments seemed less compelling after reading the other congestion avoidance papers we covered. I also found their talk of the algorithm being “fair” rather misleading, since they don’t mean fair in the accepted sense; I think they gloss over true fairness, as it was discussed in the Fair Queueing paper, a bit too much. Also, proper setting of the algorithm’s parameters seems to be important, but despite their rough guidelines it may not always be clear what the proper settings are, which seems like an undesirable property. I will grant that it seems like a step up from Drop Tail and a smaller step up from Random Drop. However, as I think we mentioned in class, that hasn’t led it to be widely deployed anyway. On a more nit-picky note, a lot of their phrasing was awkward, and their graphs were absolute crap (difficult to read, without so much as a legend). I also didn’t think the paper needed to be 20 pages long.
