Tuesday, December 1, 2009
Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
S. Nedevschi, J. Chandrashekar, J. Liu, B. Nordman, S. Ratnasamy, N. Taft, "Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems," NSDI'09, (April 2009).
One line summary: This paper examines the value of using proxies to handle idle-time traffic for sleeping hosts with the goal of reducing wasted energy consumption in networked end-systems; it does this by analyzing and classifying traffic to see what can be ignored or automatically handled and by examining several potential proxy designs.
Summary
This paper examines the problem of reducing wasted energy consumption in powered-on but idle networked end-systems such as desktops in home and office environments. It discusses various solutions, and in particular examines the value of using a proxy to handle idle-time traffic on behalf of sleeping hosts. The main problem is that while vendors have built in hardware support for sleep (S-states) to reduce power consumption while idle, surveys of office buildings indicate that the vast majority of machines are left fully on while idle instead of taking advantage of these sleep states. One reason for this is that a sleeping machine loses its network presence, i.e., it cannot send or receive network messages; another is that users or administrators occasionally want to schedule tasks to run during these idle times. Both of these reasons discourage users from making use of sleep states. This paper thus tries to answer the following questions: (1) Is the problem worth solving? (2) What network traffic do idle machines see? (3) What is the design space for a proxy? (4) What implications does proxying have for future protocol and system design?
To begin to answer these questions, the authors first collect network and user-level activity traces from 250 client machines belonging to Intel employees and attempt to classify the traffic. They first classify each packet as broadcast, multicast, or unicast, and then as incoming or outgoing. They find that outgoing traffic tends to be dominated by unicast, while incoming traffic comprises significant proportions of all three. They estimate the potential for sleep in four scenarios: (a) ignore broadcast and wake for the rest, (b) ignore multicast and wake for the rest, (c) ignore both broadcast and multicast, and (d) wake for all packets. They find that broadcast and multicast are mainly responsible for reducing the amount of potential sleep time, and that doing away with just one of broadcast or multicast is not effective. The authors next classify the traffic by protocol type and evaluate each protocol on two metrics: total volume of traffic, and something the authors call half-sleep time. A high half-sleep time means that protocol’s packets could be handled by waking the machine up, whereas a low half-sleep time means the proxy would have to handle them to achieve useful amounts of sleep. They find that the bulk of broadcast traffic is for address resolution and service discovery, and that much of the remaining broadcast traffic comes from router-specific protocols. Broadcast traffic allows for very little sleep in the office, but significantly more in the home; a proxy could easily handle most of these broadcast protocols. Multicast traffic is mostly caused by router protocols and is often absent or greatly reduced in homes as compared to offices. All router traffic is ignorable. From their analysis of unicast traffic, they speculate that it might be possible to ignore or eliminate much of it. Finally, they classify traffic into one of three categories regarding the need to proxy it (don’t wake, don’t ignore, and policy-dependent) and into one of three categories regarding the difficulty of proxying it (ignorable/drop, handle via mechanical responses, and require specialized processing).
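To make the classification concrete, here is a minimal sketch (in Python, not from the paper) of how one might bucket trace packets by cast type and estimate potential sleep time under the ignore scenarios; the packet record, the address heuristics, and the idle-gap threshold are my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: float   # arrival time in seconds
    dst_ip: str        # destination IPv4 address

def cast_type(dst_ip: str) -> str:
    """Crudely classify a destination address as broadcast, multicast, or unicast."""
    if dst_ip == "255.255.255.255" or dst_ip.endswith(".255"):
        return "broadcast"   # limited or (roughly) subnet-directed broadcast
    if 224 <= int(dst_ip.split(".")[0]) <= 239:
        return "multicast"   # IPv4 multicast range 224.0.0.0/4
    return "unicast"

def potential_sleep(packets, ignore, idle_gap=10.0):
    """Sum the gaps between consecutive wake-worthy packets.

    `ignore` is the set of cast types the host would sleep through
    (e.g. {"broadcast", "multicast"} for scenario (c)); gaps shorter
    than `idle_gap` seconds are assumed too short to sleep through.
    """
    wake_times = [p.timestamp for p in packets
                  if cast_type(p.dst_ip) not in ignore]
    return sum(b - a for a, b in zip(wake_times, wake_times[1:])
               if b - a >= idle_gap)

# Example: compare the four scenarios on a hypothetical trace `trace`:
# for name, ignored in [("a", {"broadcast"}), ("b", {"multicast"}),
#                       ("c", {"broadcast", "multicast"}), ("d", set())]:
#     print(name, potential_sleep(trace, ignored))
```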
Next, they present four proxy designs. The first ignores all traffic classified as ignorable and wakes the host for the rest. The second ignores all traffic classified as ignorable, responds to traffic listed as capable of being handled by mechanical responses, and wakes the machine for the rest. The third does the same as the second except that it wakes up only for traffic belonging to a certain set and drops any other incoming traffic. Lastly, the fourth does the same as the third except that it also wakes up for a certain set of scheduled tasks. They find that the simplest proxy, proxy 1, is inadequate for office environments and nearly inadequate for home environments, but that proxy 3 achieves a good amount of sleep time in all scenarios – more than 70% of the idle time. They also find that the effectiveness of proxy 2 depends a great deal on the environment. Given this, the best trade-off between design complexity and power savings depends on the environment. The authors also note that since scheduled wake-ups are infrequent, their impact on sleep is minimal, so proxy 4 performs practically the same as proxy 3. Finally, they offer a basic proxy architecture that could serve as a framework for building the different proxy designs they considered, and to demonstrate the feasibility of building a proxy, they implemented a simple proxy prototype in Click. The authors end by speculating about how systems could be redesigned to make them more power-aware, thereby simplifying the implementation of proxies, making proxies more effective, or eliminating the need for proxies altogether.
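As a rough illustration of how these four designs differ, the sketch below encodes their decision logic; the protocol lists are placeholders rather than the paper’s actual classification tables, and proxy 4’s scheduled wake-ups are only noted in a comment.

```python
# Placeholder classification tables; the paper's actual per-protocol lists differ.
IGNORABLE  = {"router-ads", "misc-multicast"}    # safe to silently drop
MECHANICAL = {"arp", "icmp-echo"}                # answer on the host's behalf
WAKE_SET   = {"ssh", "smb", "backup"}            # traffic worth waking the host for

def proxy_action(protocol: str, design: int) -> str:
    """Return 'drop', 'respond', or 'wake' for an incoming packet, for proxy designs 1-4."""
    if protocol in IGNORABLE:
        return "drop"                            # all four designs drop ignorable traffic
    if design >= 2 and protocol in MECHANICAL:
        return "respond"                         # mechanical reply; host stays asleep
    if design >= 3:
        # Designs 3 and 4 wake only for a whitelist and drop everything else.
        # Design 4 additionally wakes the host for scheduled tasks (timer-driven,
        # not triggered by incoming traffic, so it is omitted here).
        return "wake" if protocol in WAKE_SET else "drop"
    return "wake"                                # designs 1 and 2 wake for everything else
```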
Critique
One thing I really liked about this paper is how the authors analyzed and classified network traffic before considering proxy design. In retrospect this seems absolutely necessary for guiding the design of a proxy. It was also informative in general to look at the traffic traces from the perspective of which packets are ignorable, which can be handled automatically, and which are actually “important”. I also liked how they examined several points in the proxy design space and compared them. Overall I thought this was a very thoughtful and well-organized paper, and I think it should stay in the syllabus.
Labels: Chandrashekar, enterprise networks, Liu, Nedevschi, Nordman, power consumption, Ratnasamy, Taft
A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing
S. Floyd, V. Jacobson, S. McCanne, C-G Liu, L. Zhang, "A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing," ACM SIGCOMM Conference, (August 1995).
One line summary: This paper describes Scalable Reliable Multicast, a minimal framework from which applications designers can build multicast functionality suitable to their application; it also describes some analytical results pertaining to request/repair algorithms in multicast, along with several simulations.
Summary
This paper describes a reliable multicast framework called Scalable Reliable Multicast (SRM). SRM builds on the principles of Application Level Framing (ALF), which “explicitly includes an application’s semantics in the design of that application’s protocol,” and Light-Weight Sessions (LWS), which centers on a “light-weight rendezvous mechanism based on the IP multicast distribution model” with receiver-based adaptation. SRM is thus designed to meet only the minimal definition of reliable multicast, so as not to force on applications unnecessary overhead for functionality they do not need. It is also designed to adhere to the core principles of TCP/IP in that it requires only the basic IP delivery model and dynamically adjusts control parameters based on observed performance, much like TCP. The authors argue that receiver-based reliability is more appropriate for SRM than sender-based reliability because the fate-sharing-based coupling of unicast does not generalize well to multicast (due to such factors as the ACK implosion effect and the need for the sender to maintain state about each of the receivers in the receiver set), and because the vocabulary of unicast conventions migrates poorly to multicast. SRM attempts to serve as a skeleton common to scalable, reliable multicast applications, which supply the framework with details such as a namespace, policies and mechanisms for apportioning bandwidth, etc.
The authors go on to describe a network conferencing tool that provides a distributed whiteboard, called wb, which builds on SRM. In wb, users are members, each of whom has a globally unique identifier that is used to label the pages they create and edit. Wb assumes that all data has a unique name, that a name always refers to the same data, that source-IDs are persistent, that IP multicast datagram delivery is available, and that all participants join the same multicast group. The paper describes wb’s instantiation of SRM. It then describes request/repair algorithms more generally for several simple topologies, including chains, stars, and bounded-degree trees. In the context of these algorithms, it defines deterministic and probabilistic suppression of duplicate messages. They find, via simulations, that their algorithm using fixed timer parameters performs well in random or bounded-degree trees when every node in the tree is a member of the multicast group. They use this to motivate the development of an adaptive request/repair algorithm that adjusts timer parameters as a function of delay as well as of duplicate requests or repairs in recent recovery exchanges. They demonstrate trade-offs between low delay and a low number of duplicates.
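For intuition about how these request timers work, here is a small sketch of SRM-style request scheduling with suppression as I understand it from the paper: a receiver that detects a loss sets a timer drawn uniformly from an interval proportional to its estimated one-way delay to the source, and hearing another member’s request for the same data before the timer fires suppresses its own request and exponentially backs off the timer. The parameter values and class interface here are illustrative, not the paper’s.

```python
import random

class RequestTimer:
    """SRM-style loss-request timer with duplicate suppression (illustrative parameters)."""

    def __init__(self, delay_to_source: float, c1: float = 2.0, c2: float = 2.0):
        self.d = delay_to_source   # estimated one-way delay to the data's source
        self.c1, self.c2 = c1, c2
        self.backoff = 1           # doubles each time our request is suppressed

    def schedule(self, now: float) -> float:
        """Pick a firing time uniformly in backoff * [c1*d, (c1+c2)*d] from now."""
        lo = self.backoff * self.c1 * self.d
        hi = self.backoff * (self.c1 + self.c2) * self.d
        return now + random.uniform(lo, hi)

    def on_duplicate_request(self, now: float) -> float:
        """Heard another member request the same data: back off and reschedule
        rather than sending a duplicate request of our own."""
        self.backoff *= 2
        return self.schedule(now)
```

The randomized interval is what gives probabilistic suppression (members closer to the source tend to fire first), while the delay-proportional lower bound provides the deterministic component.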
Critique
I didn’t really like reading this paper. I don’t really like simulations, for one. It also annoys me that the authors chose what they admit to be arbitrary values for some of the parameters in their algorithm. Nevertheless, however simplified the topologies in their analysis section may be, I would agree that it is good to have some mathematical results for problems like these request/repair algorithms in multicast, in order to provide some intuition to people implementing actual systems and to help guide their choices. In that vein, I would be interested in knowing more about any work that directly builds on or uses their idea of a framework for scalable, reliable multicast, and in learning exactly how this particular paper was useful. On an unrelated note, it also very seriously annoyed me that the graphs in this paper lacked axis or data labels of any kind.