Thursday, October 15, 2009

ExOR: Opportunistic Multi-Hop Routing for Wireless Networks


S. Biswas, R. Morris, "ExOR: Opportunistic Multi-Hop Routing for Wireless Networks," ACM SIGCOMM Conference, (August 2005).

One line summary: This paper describes a wireless routing protocol called ExOR that is inspired by cooperative diversity schemes and that achieves significant improvement in throughput over traditional routing protocols.

Summary

This paper describes a wireless routing and MAC protocol called ExOR. In contrast to traditional routing protocols, which choose a sequence of nodes along a path and then forward packets through each node on that path in order, ExOR is inspired by cooperative diversity routing schemes: it broadcasts each packet to a list of candidate forwarders, and the best node to receive the packet forwards it in turn. The authors demonstrate that ExOR achieves a substantial increase in throughput over traditional routing protocols. One reason for this is that each transmission in ExOR has more chances of being received and forwarded. Another is that ExOR takes advantage of transmissions that go unexpectedly far or that fall short. Four key design challenges for ExOR were (1) nodes must agree on which subset of them received each packet, (2) the node closest to the destination that receives a packet should be the one to forward it, (3) there is a penalty to using too many nodes as forwarders since the cost of agreement goes up, and (4) simultaneous transmissions must be avoided to prevent collisions.

Briefly, ExOR works as follows. The source batches packets destined for the same host and broadcasts the batch. Each packet carries a batch map that indicates, for each packet in the batch, the highest-priority node known to have received it. Each packet also carries a forwarder list of nodes, ordered by increasing cost of delivering a packet from that node to the destination; the cost metric used is similar to ETX. A node that receives a packet checks the forwarder list for itself. It also checks the batch map in the packet and updates its local batch map wherever the packet’s map indicates that a higher-priority node has received a packet. In the order in which they appear in the forwarder list, nodes forward all remaining packets that have not yet been received by a higher-priority node. Each node estimates the time it needs to wait before sending, or uses a default value if it has no basis for guessing. Once the last node in the forwarder list has transmitted, the source starts the process again by broadcasting all packets not yet received by any node. The destination sends copies of its batch map to propagate information back to the sender about which packets were received. Nodes continue transmitting only until they receive an indication that 90% of the batch has been received; the rest is then forwarded using traditional routing.
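
To make the batch map and priority logic concrete, here is a minimal Python sketch of the forwarding decision as I understand it. The function names and data structures are my own illustration, not the paper’s implementation, and the timing and retransmission machinery is omitted.

```python
# Hypothetical sketch of ExOR's per-node forwarding decision.
# Convention: a lower index in the forwarder list means higher
# priority (lower ETX-like cost to the destination).

def merge_batch_map(local_map, packet_map, forwarder_list):
    """Adopt entries from a received batch map that name a
    higher-priority holder than the local map does."""
    for pkt, holder in packet_map.items():
        current = local_map.get(pkt)
        if current is None or (
            forwarder_list.index(holder) < forwarder_list.index(current)
        ):
            local_map[pkt] = holder
    return local_map

def packets_to_forward(my_id, forwarder_list, batch_map, received):
    """Packets this node should transmit when its turn comes:
    those it holds that no higher-priority node is known to hold."""
    my_priority = forwarder_list.index(my_id)
    return [
        pkt for pkt in received
        if batch_map.get(pkt) is None
        or forwarder_list.index(batch_map[pkt]) >= my_priority
    ]
```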

ExOR was evaluated on Roofnet. The authors measure the throughput obtained when transmitting a 1 MB file between each of 65 pairs of nodes, using both traditional routing and ExOR. They compare the 25 highest and 25 lowest throughput pairs (with respect to traditional routing). ExOR outperforms traditional routing even over just one hop. They also show that ExOR had to retransmit packets on average half as many times as the traditional routing protocol did. They examine the performance of ExOR using various batch sizes and conclude the optimal size is likely between 10 and 100 packets. They use a simulator to study the effects of shared interference on the performance of ExOR and conclude that it only slightly hurts ExOR’s performance. Lastly, they show that the throughput of ExOR exhibits less variance than that of traditional routing.

Critique

I liked the ExOR protocol because I thought it was unusual and clever, and it did get much better throughput than traditional routing. I thought it was especially interesting that it got better throughput even over one hop. I also liked how, in the evaluation section, the authors broke their results down into the 25 highest and 25 lowest throughput pairs; it was informative and a nice way to think about the results.

All that said, here are some things I didn’t like about this paper. The authors assume that reception at different nodes is independent and that there is a gradual falloff in delivery probability with distance, and the performance of ExOR in part depends upon these assumptions, but it seems these could be examined more closely. I thought their simulation might have been oversimplified, to the extent that I’m not convinced it really gave much useful information. Another thing is that it seems like the nodes have to maintain a lot of state (delivery probabilities for all pairs, unless I’m misunderstanding something). Also, as in some other papers from these authors, they ran their evaluations while other traffic was running over their Roofnet testbed. This seems bad to me because I thought scientists were supposed to control as many variables as possible so that their experiments are repeatable. Another issue with the evaluations is that they had the traditional routing protocol send the entire file to the next hop before the next node starts sending. They claim this is more fair, and that may indeed be the case, but since I don’t think this is what traditional routing normally does, it seems like they should have tried it both ways to verify that the modified approach really does give traditional routing better performance.

Overall though I liked this paper and I think it should stay in the syllabus.

Thursday, October 8, 2009

A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols


J. Broch, D. Maltz, D. Johnson, Y-C Hu, J. Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," ACM Mobicom Conference, (October 1998).

One line summary: This paper uses a network simulator, which the authors improved by adding relatively realistic physical and spatial models, to compare four wireless ad hoc network routing protocols (DSDV, TORA, DSR, and AODV) that cover a wide range of design choices.

Summary


This paper compares four wireless ad hoc network routing protocols using detailed simulations. The four protocols compared were (1) Destination-Sequenced Distance Vector using sequence number triggered updates (DSDV-SQ), (2) Temporally Ordered Routing Algorithm (TORA), (3) Dynamic Source Routing (DSR), and (4) Ad Hoc On-Demand Distance Vector using link layer feedback to detect link breakage (AODV-LL). DSDV-SQ is a hop-by-hop distance vector routing protocol that uses periodic broadcast updates and guarantees loop freedom. TORA is a distributed routing protocol that uses a link-reversal algorithm to provide on demand route discovery and runs on top of IMEP. DSR uses on-demand source routing and consists of Route Discovery and Route Maintenance phases. Lastly, AODV-LL is like a combination of DSR and DSDV because it has on-demand Route Discovery and Route Maintenance along with hop-by-hop routing and sequence numbers. These protocols were simulated using ns-2 at varying levels of node mobility and number of senders. The simulator was enhanced to allow for realistic physical layer and propagation modeling as well as node mobility. It used a random waypoint model to simulate mobility and constant bit rate (CBR) sources. The metrics by which the protocols were judged were (1) packet delivery ratio, (2) routing overhead, and (3) path optimality.
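
Since the random waypoint model drives all of the mobility results, here is a small Python sketch of it; the parameter names and the fixed per-leg speed are my own simplifications (the paper draws speeds and pause times from configured distributions).

```python
import math
import random

def random_waypoint(x, y, speed, pause, width, height, duration, dt=0.1):
    """Yield (t, x, y) samples for one node: pick a random waypoint,
    move toward it in a straight line, pause, and repeat."""
    t = 0.0
    while t < duration:
        dx = random.uniform(0, width) - x
        dy = random.uniform(0, height) - y
        steps = max(1, int((math.hypot(dx, dy) / speed) / dt))
        for _ in range(steps):
            x, y, t = x + dx / steps, y + dy / steps, t + dt
            yield t, x, y
        t += pause                  # node sits still for the pause time
        yield t, x, y
```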

The results are presented for each metric. These results are for hosts that move at a speed of 20 m/s when not paused. In terms of the percentage of packets delivered, DSR and AODV-LL deliver between 95% and 100% of packets regardless of offered load and node mobility. DSDV-SQ drops to 70% packet delivery under constant node mobility, which the authors attribute to packets dropped on account of stale routing table entries. TORA does well until the number of sources sending packets reaches 30, at which point it experiences congestive collapse due to a positive feedback loop. In terms of routing overhead, DSR and AODV-LL have similar curves, although AODV-LL has higher overhead when node mobility is near constant. The authors later note that if overhead is measured in bytes instead of packets, DSR’s becomes much greater than AODV-LL’s. The overhead of TORA depends in part on node mobility and is much higher than that of any of the other protocols. DSDV-SQ has nearly constant overhead independent of node mobility or offered load. In terms of path optimality, both DSDV-SQ and DSR use near-optimal routes regardless of node mobility, whereas TORA and AODV-LL use less optimal routes when node mobility is high. Other interesting observations the authors make are that the percentage of packets successfully delivered is lower for broadcast packets than for unicast packets. They also note that early in their experiments they found a serious integration problem with ARP; they found a workaround but note that this problem would have to be addressed in any real implementation running on top of ARP. The authors don’t conclusively rank the protocols in the end, but it is clear from the experiments that DSR is probably the best protocol, followed by AODV-LL and DSDV-SQ, with the relative ranking of these two being less clear due to tradeoffs. TORA is clearly the worst protocol in almost every respect.

Critique

Although I tend to think that real-world experiments are always preferable to simulations, I like that the authors improved the network simulator to include a spatial model to simulate host mobility and a relatively realistic physical layer and radio network interface model. I also appreciated the thoroughness of their simulations with respect to their clearly stated metrics. The section describing their experimental results was considerably easier to follow than in most papers, and the way they laid out their graphs made it pretty easy to compare the results from the different protocols. I also like that they provided additional, more in-depth explanations for certain observations where they were warranted, for example, their explanation of the congestive collapse of TORA. Their section containing additional observations was nice too. For some reason I am having trouble questioning their assumptions and making criticisms of this paper (nothing immediately jumps out at me), but because they used a simulator and also implemented all the protocols themselves, there are clearly going to be a lot of assumptions that underlie their results. I thought they did a pretty good job of clearly stating what all these assumptions were, though, so at least they are there for readers to take into account. I liked this paper and I think it’s probably good to keep it in the syllabus.

A High-Throughput Path Metric for Multi-Hop Wireless Routing


D. De Couto, D. Aguayo, J. Bicket, R. Morris, "A High Throughput Path Metric for Multi-Hop Wireless Routing," ACM Mobicom Conference, (September 2003).


One line summary: This paper presents ETX, a new routing metric for wireless networks that attempts to maximize path throughput; the paper also explains why minimum hop count is a poor metric for use in wireless networks and performs an evaluation comparing min hop count with ETX.

Summary

This paper presents a new metric for routing in wireless networks called the expected transmission count (ETX). It compares ETX with the most commonly used metric, minimum hop count, which is implicitly based on the assumption that links either work well or don’t work at all. The authors give several reasons why, as a result, routing protocols that use min hop count as their metric for selecting paths achieve poor performance in wireless networks. They further quantify these reasons by evaluating the min hop count metric in a testbed. One reason min hop count performs poorly is that by minimizing the number of hops, it maximizes the distance traveled in each hop, which is likely to minimize the signal strength and maximize the loss ratio. Min hop count performs well when the shortest path is also the fastest path, but that is frequently not the case in wireless networks. When there are a number of paths with the same minimum hop count, routing protocols often choose one at random, and this is unlikely to be the best choice. Another issue is that min hop count does not deal well with asymmetric links, which are common in wireless networks. Lastly, min hop count does not take into account the wide range of loss ratios of links in wireless networks.

Given these considerations, the authors state that ETX must account for the wide range of link loss ratios, the existence of asymmetric links, and the interference between successive hops of a multi-hop path. ETX is designed to maximize throughput. The ETX of a path is the sum of the ETX of each link in the path. The ETX of a link is defined as one over the probability that a transmission over that link is successfully received and acknowledged; this probability is calculated from the measured delivery ratios in the forward and reverse directions over the link, which are in turn estimated using broadcast probe packets. Five important characteristics of ETX, according to the authors, are that (1) it is based on delivery ratios, (2) it detects and handles asymmetry, (3) it uses precise link loss ratio measurements, (4) it penalizes routes with more hops, and (5) it minimizes spectrum use. Some drawbacks of ETX are that it only makes sense for networks with link layer retransmissions, it assumes radios have a fixed transmit power, it is susceptible to problems due to MAC unfairness under high load, it might not choose the highest-throughput path when that path has more than three hops, and it does not account for mobility.
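
The link formula is simple enough to state directly: with forward delivery ratio d_f and reverse delivery ratio d_r, the ETX of a link is 1 / (d_f × d_r). Here is a short Python sketch, with a worked example of why ETX beats min hop count (the probe-window bookkeeping from the paper is omitted):

```python
def link_etx(df, dr):
    """Expected transmissions for one successful send-and-ACK over a
    link with forward delivery ratio df and reverse delivery ratio dr."""
    return 1.0 / (df * dr)

def path_etx(links):
    """The ETX of a path is the sum of its links' ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# One lossy hop vs. two clean hops: min hop count picks the one-hop
# route, but ETX correctly prefers the two-hop route.
one_lossy_hop = [(0.5, 0.5)]                 # ETX = 4
two_clean_hops = [(1.0, 1.0), (1.0, 1.0)]    # ETX = 2
assert path_etx(one_lossy_hop) > path_etx(two_clean_hops)
```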

ETX was evaluated by modifying two routing protocols, DSDV and DSR, to use it as their routing metric, and comparing them against the same protocols using min hop count. Some conclusions the authors draw with respect to DSDV are that ETX performs better than min hop count, especially when min hop count uses paths with asymmetric links; that ETX incurs more overhead than min hop count; that ETX overestimates the delivery ratio for large data packets and underestimates it for small ACKs; that ETX outperforms a version of DSDV that uses a handshaking scheme; and that ETX’s performance improves when a certain delay-use modification is applied. With respect to DSR, they found that link failure feedback allows DSR with min hop count to perform as well as DSR with ETX.

Critique

I didn’t like this paper as much as I liked the Roofnet paper. I don’t think that ETX turned out to be as impressive in the evaluation section as the authors made it sound in the initial sections, so that was kind of disappointing. In the Roofnet paper, they don’t actually use ETX directly, but use a more sophisticated metric called ETT. I thought the authors’ explanations in the beginning of the paper for why min hop count performs poorly in wireless networks were nice. I wasn’t as impressed with the rest of it. I thought it was interesting that link failure notification allowed DSR with min hop count to perform pretty much as well as DSR with ETX. I wonder how ETX would compare with tougher competitors beyond just the naïve min hop count metric. ETX clearly has some unsatisfactory features, including the overhead of using it with DSDV and its tendency to misestimate the delivery ratio for packets that are much smaller or much larger than probe packets. Also, some other things about the evaluation section confused me. For instance, their explanation of why you shouldn’t compare runs was confusing. It seems that they used entirely different parameters (packet size, transmit power) in different runs according to the labels on the graphs, but then they say you shouldn’t compare runs because the network conditions change over time, not because the parameters are different. Even though I am not very enthusiastic about this paper and ETX didn’t turn out to be as great as the authors imply at the beginning, there are probably still a lot of interesting things to learn from it, things to do as well as things not to do. The authors clearly learned from this experience because they created an improved metric, ETT, for Roofnet.

Sunday, October 4, 2009

Architecture and Evaluation of an Unplanned 802.11b Mesh Network


J. Bicket, D. Aguayo, S. Biswas, R. Morris, "Architecture and Evaluation of an Unplanned 802.11b Mesh Network," ACM Mobicom Conference, (September 2005).

One line summary: This paper describes Roofnet, an unplanned wireless mesh network built with 37 nodes over an area of about 4 square km; this paper presents its design and implementation as well as an evaluation of the actual network constructed.

Summary

This paper discusses the design and evaluation of an unplanned community wireless mesh network called Roofnet. Community wireless networks are usually constructed in one of two ways: by constructing a planned multi-hop network with nodes in chosen locations and directional antennas or by operating hot-spot access points to which clients directly connect. Roofnet aims to combine the advantages of both approaches by employing the following design decisions: (1) unconstrained node placement, (2) omni-directional antennas, (3) multi-hop routing, and (4) optimization for routing in a slowly changing environment with many links of varying quality.

Roofnet consists of 37 nodes spread across about four square kilometers. Node locations are neither random nor truly planned. Roofnet provides Internet access. The nodes are self-configuring. Roofnet uses its own set of IP addresses, meaningful only within Roofnet. Each node runs a DHCP server and provides NAT to the hosts connected to it. If a Roofnet node can connect directly to the Internet, it acts as a gateway to the rest of Roofnet; at the time the paper was written, Roofnet had four gateways. Roofnet uses its own routing protocol called Srcr. Srcr uses source routing and attempts to find the highest-throughput routes using Dijkstra’s algorithm. The routing metric used in lieu of exact information about the throughput of routes is the estimated transmission time (ETT), which is a prediction of the amount of time it would take a packet to traverse a route given each link’s transmit bit rate and the delivery probability at that bit rate. Nodes choose among the available 802.11b transmit bit rates using an algorithm called SampleRate, which attempts to send packets at the bit rate that will provide the most throughput.
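
To illustrate the intuition behind ETT, here is a hedged Python sketch of an ETT-style link cost; it is my own simplification (the packet size constant and the geometric-retry model are assumptions, not Roofnet’s actual code):

```python
PACKET_BITS = 1500 * 8   # assumed packet size for the estimate

def link_ett(bit_rate_bps, delivery_prob):
    """Expected time to get one packet across a link: per-attempt
    airtime scaled by the expected number of attempts."""
    airtime = PACKET_BITS / bit_rate_bps
    return airtime * (1.0 / delivery_prob)

def route_ett(links):
    """Srcr-style route cost: the sum of link ETTs, which Srcr
    minimizes with Dijkstra's algorithm."""
    return sum(link_ett(rate, p) for rate, p in links)

# A fast but lossy 11 Mb/s link can still beat a clean 1 Mb/s link,
# which is why this kind of metric favors short, fast hops:
assert link_ett(11e6, 0.8) < link_ett(1e6, 1.0)
```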

Roofnet was evaluated using four sets of measurements: (1) multi-hop TCP, (2) single-hop TCP, (3) loss matrix, and (4) multi-hop density. Some simulation was also used. A brief overview of their findings follows. Roofnet’s one-hop routes have a speed consistent with the 5.5 Mb/s transmission rate, but longer routes are slower; that is, throughput decreases with each hop faster than might be expected. The authors speculate that this is due to collisions between concurrent transmissions. The maximum number of hops to a gateway is five. Roofnet’s routing algorithm, Srcr, prefers short, fast hops. The median delivery probability of the links used by Srcr is 80%. Roofnet approaches all-pairs connectivity with more than 20 nodes, and as the number of nodes increases, throughput increases. The majority of Roofnet nodes route through more than two neighbors for their first hop, suggesting the network makes good use of the mesh topology. The best few links in Roofnet contribute considerably to overall throughput, but dozens of nodes must be eliminated before throughput drops by half. Fast links are more important for throughput, but long, fast links are more important for connectivity. In Roofnet, if only single-hop routing is used, five gateways are needed to cover all nodes. For five or fewer gateways, randomly chosen multi-hop gateways are better than randomly chosen single-hop gateways, but for larger numbers of gateways, carefully chosen single-hop ones are better. The authors also examined one 24-hour period of use of the Roofnet network by their volunteer users, monitoring one gateway. They found that 94% of the traffic through the gateway was data and the rest was control packets. 48% of the traffic was from nodes one hop away, 36% from nodes two hops away, and the rest from nodes three or more hops away. Almost all of the packets through the gateway were TCP. Web requests made up 7% of the data transferred and BitTorrent made up 30%, although 67% of the connections were web connections and only 3% were BitTorrent.

Critique

I thought this paper was very interesting, and the results of their evaluations especially so. However, I don’t think it is good that they allowed users to use Roofnet while they were doing their experiments. It seems like this would definitely have some effect, but one that is difficult to quantify and explain. This seems especially true considering they had a not insignificant amount of BitTorrent traffic on their network. Also, I think they should have explained how they did their simulations a bit more. I wasn’t entirely clear on that point; for instance, in these simulations, did they still let SampleRate determine at what rate to transmit? Since SampleRate adjusts over time, I am not sure what this means for their simulations. Despite these things I still liked their evaluations. I thought they selected interesting aspects of Roofnet to examine, and their results were presented in a very nice and clear way.

I think that the design of Roofnet itself is pretty cool. Also I often tend to prefer papers that talk about things actually built as opposed to just simulated. Their routing algorithm is clever although they point out that it may not be scalable. One thing that confused me is that in the section on addressing, they explain that each Roofnet node assigns itself an address from an unused class-A address block. They say these addresses are only meaningful within Roofnet and are not globally routable, so I wonder if they need to be unused addresses. If they do, that’s obviously a huge constraint, and if they don’t, I’m not sure why they state they are unused but don’t explain that they don’t need to be unused. I may be misunderstanding something very basic there, but if not, they may be leaving something out.

In summary, this paper was fun to read and I think it should stay in the syllabus.

Modeling Wireless Links for Transport Protocols


Andrei Gurtov, Sally Floyd, "Modeling Wireless Links for Transport Protocols," ACM SIGCOMM Computer Communications Review, Volume 34, Number 2, (April 2004).

One line summary: This paper discusses modeling wireless links, including problems with current models and how the assumptions of a model can affect the evaluation of transport protocols using that model.

Summary

This paper discusses modeling wireless links, especially for the purpose of evaluating transport protocols. The paper first briefly describes the three main types of wireless links: cellular, wireless LANs, and satellite links. It then discusses common topologies found with wireless links, as well as the typical kinds of traffic that run over them. It states that common performance metrics for wireless links include throughput, delay, fairness, dynamics, and goodput. The paper goes on to give four reasons why better models are needed, along with supporting examples for each: (1) some current models are not realistic, (2) some current models are realistic but explore only a small fraction of the parameter search space, (3) some current models are overly realistic, and (4) many models are not reproducible. The paper then describes specific characteristics of wireless links: error losses and corruption, delay variation, packet reordering, on-demand resource allocation, bandwidth variation, and asymmetry in bandwidth and latency. It also discusses the effect of queue management and node mobility on transport protocols. The paper then argues that it is not necessarily the case that transport protocols must adapt to wireless links, or that wireless link layer protocols must accommodate all transport protocols, but rather that the designers of each should take into account the characteristics of the other and their interplay. Finally, the paper discusses several areas where it is not clear how link layer and transport layer protocols should interact: bit error detection and correction, packet reordering, delay variation, and cross-communication between layers.
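
To make the paper’s argument about explicit, reproducible models concrete, here is a minimal Python sketch of a parameterized link model of the kind it advocates; the class and its knobs are my own illustration, not something defined in the paper:

```python
import random

class WirelessLinkModel:
    """Toy one-way link with explicit, reproducible parameters for
    error loss, delay variation, and bandwidth."""

    def __init__(self, loss_rate, base_delay, delay_jitter,
                 bandwidth_bps, seed=0):
        self.loss_rate = loss_rate        # probability of error loss
        self.base_delay = base_delay      # propagation delay, seconds
        self.delay_jitter = delay_jitter  # max extra queueing delay, seconds
        self.bandwidth_bps = bandwidth_bps
        self.rng = random.Random(seed)    # seeded, so runs are reproducible

    def transmit(self, packet_bytes):
        """Return the one-way delay for a packet, or None if it is lost."""
        if self.rng.random() < self.loss_rate:
            return None                   # non-congestion (error) loss
        serialization = packet_bytes * 8 / self.bandwidth_bps
        return (self.base_delay + serialization
                + self.rng.uniform(0, self.delay_jitter))
```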

Critique

I didn’t really like this paper. It’s not that I disagree with anything the authors said; many of the things they pointed out are well known, and they say so. It is useful that they aggregated all this information and presented it in a nicely organized and logical way, and it was somewhat informative. I guess I’m just surprised that it was published when they didn’t really implement anything, prove anything, or even have that many results. I would sooner expect to find this material in a textbook.

Thursday, October 1, 2009

A Comparison of Mechanisms for Improving TCP Performance over Wireless Links


H. Balakrishnan, V. Padmanabhan, S. Seshan, R. H. Katz, "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links," IEEE/ACM Transactions on Networking, (December 1997).


One line summary: This paper compares several end-to-end, link layer, and split connection schemes for improving the performance of TCP over wireless links, and concludes that link layer schemes incorporating TCP awareness and selective acknowledgments, as well as end-to-end schemes using Explicit Loss Notification and selective acknowledgments, were the most promising.

Summary

This paper compares a number of schemes for improving TCP performance in wireless networks using an experimental testbed consisting of a wired and a wireless link. The approaches compared fall into three categories: end-to-end, link layer, and split connection. In the end-to-end category, six protocols were tested: the de facto standard TCP implementation, TCP Reno (labeled E2E); TCP New Reno (E2E-NEWRENO), which improves on TCP Reno by remaining in fast recovery after a partial acknowledgment so that it can recover from multiple losses at the rate of one packet per RTT; two schemes using selective acknowledgments (SACKs), labeled E2E-SMART and E2E-IETF-SACK; and two schemes using Explicit Loss Notification (ELN), labeled E2E-ELN and E2E-ELN-RXMT. Four link layer schemes were tested: a base link layer algorithm (LL), a link layer algorithm that uses SACKs (LL-SMART), and two additional schemes that add TCP awareness to the first two, labeled LL-TCP-AWARE and LL-SMART-TCP-AWARE. Two split connection schemes were tested: one labeled SPLIT, modeled after I-TCP, and another that adds SACKs to the first, called SPLIT-SMART.
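
As a rough illustration of what “TCP awareness” buys at the link layer, here is a snoop-style Python sketch: the base station caches unacknowledged segments, retransmits locally when duplicate ACKs suggest a wireless loss, and suppresses those duplicate ACKs so the sender’s congestion control never fires. This is my own simplification of the idea, not the authors’ implementation.

```python
class SnoopAgent:
    """Toy TCP-aware link-layer agent at the wired/wireless boundary."""

    def __init__(self, wireless_send):
        self.cache = {}            # seq -> cached segment bytes
        self.last_ack = -1
        self.wireless_send = wireless_send

    def on_data_from_sender(self, seq, segment):
        self.cache[seq] = segment  # cache before forwarding downstream
        self.wireless_send(seq, segment)

    def on_ack_from_receiver(self, ack):
        """Return the ACK to forward to the sender, or None to suppress it."""
        if ack > self.last_ack:
            # New cumulative ACK: drop acknowledged segments, pass it on.
            for seq in [s for s in self.cache if s < ack]:
                del self.cache[seq]
            self.last_ack = ack
            return ack
        # Duplicate ACK: treat as a wireless loss, not congestion.
        if ack in self.cache:
            self.wireless_send(ack, self.cache[ack])  # local retransmission
            return None            # hide the dupack from the TCP sender
        return ack
```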

The authors perform simple initial experiments on the mechanisms in each category. Of the link layer schemes, the authors conclude that the ones that use knowledge of TCP semantics perform the best. Of the end-to-end protocols, the results suggest that TCP New Reno is better than TCP Reno and that a scheme making use of both SACKs and ELN would probably perform best. Of the split connection schemes, SPLIT-SMART is better, but the performance of these schemes is overall not as good as those in the other categories. Using the most promising schemes from their initial experiments, the authors then test these schemes’ reaction to burst errors and their performance at different error rates. Overall, the authors draw four main conclusions: (1) of all the schemes investigated, a link layer protocol with TCP awareness and selective acknowledgments performs the best, (2) splitting the end-to-end connection is not necessary for good performance and violates end-to-end semantics, (3) selective acknowledgments are especially useful over lossy links when losses occur in bursts, and (4) end-to-end schemes are not as good as link layer schemes, but they are promising because they do not require any support from the link layer or intermediate nodes.

Critique

The comparisons and overviews of the different techniques for improving TCP over wireless were informative. One thing that is a bit questionable is that the authors implemented each of the protocols themselves. The reason I mention this is that in class I remember Prof. Katz telling us about a company that claimed to be able to distinguish between different implementations of TCP because each had a unique signature of sorts in the way it operated. This suggests that two different implementations of the same protocol aren’t necessarily the same. It might have been a fairer comparison to use the “standard implementation” of each protocol, but if this was not possible in all cases, then implementing them all is probably better anyway. On second thought, this method still works for the sake of comparing techniques, if not implementations, and you probably wouldn’t want the results to be implementation dependent anyway. This paper was the first time I had heard about split connection protocols, and I have to say, they don’t seem like a very good idea at all, though that could be because of the way they were presented and tested. It confused me that they still used TCP over the wireless link in the split connection schemes; that didn’t seem very useful. One other thing that confused me is why, at one point in a footnote, the paper states that they only measure “receiver throughput”, when at an earlier point they state they measure “throughput at the receiver”. I think that in this case they intended the same meaning by these two phrases, but generally I would think the two phrases don’t mean the same thing. I also found the charts not so great to read. Lastly, I think an interesting thing to study next would be a comparison of these protocols over more complex wireless topologies, as the authors mention in the section on future work.

MACAW: A Media Access Protocol for Wireless LANs


V. Bharghaven, A. Demers, S. Shenker, L. Zhang, "MACAW: A Media Access Protocol for Wireless LANs," ACM SIGCOMM Conference, (August 1994).

One line summary: This paper introduces a wireless media access protocol called MACAW that builds on prior protocols; MACAW is based on the use of RTS, CTS, DS, ACK, and RRTS messages, along with backoff algorithms for when contention is detected.

Summary

This paper discusses modifications to a previous wireless access protocol called MACA to alleviate problems with performance and fairness. The result of these modifications is MACAW. MACAW is designed around four key principles: (1) relevant media contention is at the receiver, not the sender, (2) congestion is location dependent and is not a homogeneous phenomenon, (3) learning about congestion must be a collective enterprise among devices, and (4) synchronization information should be propagated so all devices can contend fairly and effectively. MACAW was designed and tested with consideration for collisions but not capture or interference; it was designed to tolerate noise but mostly tested in a noise-free setting.

This paper first describes problems with the CSMA and MACA protocols. It then describes the ways in which MACA was improved to yield MACAW. The basic mechanisms of MACA are Request-To-Send (RTS) and Clear-To-Send (CTS) messages, along with a backoff algorithm. The first change MACAW makes is to the backoff algorithm: it uses multiplicative-increase linear-decrease (MILD) instead of binary exponential backoff (BEB) to reduce oscillation in backoff times, and adds backoff copying to prevent starvation. MACAW also keeps a separate queue for each outgoing stream, as opposed to one queue for the sender as a whole, to achieve a sort of intuitive fairness (not precisely defined here) across flows. It then introduces four changes to the RTS-CTS message exchange: (1) acknowledgment (ACK) messages to avoid transport layer retransmission backoffs, (2) Data-Sending (DS) messages to indicate the beginning of data transmission, (3) Request-for-Request-to-Send (RRTS) messages to allow a receiver to contend on behalf of a sender when the receiver was previously unable to respond to an RTS due to contention, and (4) a procedure for multicast. MACAW then refines the backoff algorithm once more by making each station maintain a per-stream backoff counter that reflects congestion at both the sender and the receiver.
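
Here is a small Python sketch contrasting the two backoff rules, plus the backoff-copying idea; the 1.5x increase and decrease-by-one follow the paper’s description of MILD, while the slot bounds are my own illustrative choices:

```python
BO_MIN, BO_MAX = 2, 64   # illustrative backoff bounds, in slots

def mild_update(backoff, success):
    """MACAW's MILD rule: a gentle linear decrease on success avoids
    the wild oscillations BEB produces under contention."""
    if success:
        return max(BO_MIN, backoff - 1)        # linear decrease
    return min(BO_MAX, int(backoff * 1.5))     # multiplicative increase

def beb_update(backoff, success):
    """MACA's BEB rule, for comparison."""
    if success:
        return BO_MIN                          # reset to minimum
    return min(BO_MAX, backoff * 2)            # exponential increase

def copy_backoff(local_backoff, heard_backoff):
    """Backoff copying: adopt the value carried in an overheard packet
    header, so stations near a congested area contend with similar
    backoffs and no one starves."""
    return heard_backoff
```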

All evaluation of MACAW was done via simulation. The authors mention several design options they chose not to implement, including piggy-backed ACKs, NACKs, and the use of carrier-sense or intermediate options involving clean signals, and they also describe alternative mechanisms that are entirely different from MACAW, such as token-based schemes and polling or reservations. They admit there are problems that arise in certain situations that they have not solved and their solution in the multicast case is not good. They also admit that they don’t have a precise definition of fairness.

Critique

I liked this paper’s candidness about the shortcomings of its approach. I did not like that they performed all evaluations using simulations instead of attempting a real-world evaluation. It would have been nice if they could have implemented some of the design alternatives mentioned, such as NACKs; since they were only using simulations, I would think this would have been relatively easy to do. I also wish they had said more about their ideas for dealing with the multicast problem, because I’m sure they had some. One obvious idea is to have the sender send a special Multicast-Request-To-Send (MRTS) message to each of the multicast receivers in turn to avoid contention among the CTS messages from the receivers. The MRTS message could indicate to the receivers that they will have to wait some time before receiving the DS message and the data. I’m sure there are problems with this solution and it wouldn’t work for all multicast situations, but it would have been nice if they had discussed some of the potential solutions and their shortcomings in a little more depth. Overall I liked this paper, though, because it seems like it laid the groundwork for a lot of later wireless protocols and it was easy to read. Their explanations and examples were very clean, and they explained the evolution and reasoning behind their design well.