Tuesday, December 1, 2009
Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks
R. Gummadi, H. Balakrishnan, P. Maniatis, S. Ratnasamy, "Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks," NSDI'09, (April 2009).
One line summary: This paper presents Not-A-Bot (NAB), a system that distinguishes human-generated traffic from bot-generated traffic by attesting to human activity at the client and verifying those attestations at the server, in order to mitigate problems such as spam, DDoS attacks, and click fraud.
Summary
This paper presents a system called Not-A-Bot (NAB) for distinguishing human-generated web traffic from bot-generated web traffic. The motivation is that bots are responsible for a large amount of spam, distributed denial-of-service (DDoS) attacks, and click fraud, so being able to determine whether an email or request was generated by a human rather than a bot would help mitigate these problems. NAB consists of an attester and a verifier. When an application requests an attestation for a particular request or email, the attester determines whether that request or email was indeed generated by a human, and if so, it attaches a signed statement certifying as much. The verifier runs on the server and checks whether the request or email carries a valid attestation. If it does, the application may choose, for example, to prioritize the request, or to increase the score of an email so it is more likely to get through a spam filter. Otherwise, it may treat the request or email as more likely to have come from a bot.
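To make the verifier's role concrete, here is a minimal sketch of what an application-specific verification policy might look like for a mail server. This is my own illustration, not the paper's implementation: NAB uses TPM-rooted signatures, whereas the sketch below stands in an HMAC for the signature check, and the score adjustments are arbitrary.

```python
import hmac, hashlib

SHARED_KEY = b"tpm-protected-key"  # illustration only; NAB uses real signatures

def attestation_valid(att: bytes, payload: bytes) -> bool:
    # Stand-in for verifying a TPM-rooted signature over the payload.
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(att, expected)

def spam_score(att, payload: bytes, base_score: float) -> float:
    if att is None:
        return base_score          # unattested: no benefit, no penalty
    if not attestation_valid(att, payload):
        return base_score + 1.0    # bad attestation: treat as suspicious
    return base_score - 2.0        # valid human attestation: boost the email
```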
NAB assumes that applications and the OS are untrusted, so it relies on a Trusted Platform Module (TPM) to load the attester code and ensure that it is trusted. As mentioned, the NAB attester grants an attestation if it determines that a human generated the associated request or email. It does this heuristically, based on how recently keyboard or mouse activity was observed before the attestation request: if the request arrives within a certain amount of time of the last keyboard or mouse activity, the attester grants it. An attestation is non-transferable and is bound to the content of the request it is generated for; it covers the entire application-specific payload and is responder-specific and, where appropriate, challenger-specific. The mechanism for attesting web requests and email in the common case is straightforward. The only complicated case is script-generated email, which requires deferred attestations. The verifier is straightforward and, as mentioned, implements an application-specific policy. The authors provide several example policies for spam, DDoS, and click-fraud mitigation.
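For concreteness, here is a toy sketch of the attester's timing heuristic as I understand it. The key name, window value, and HMAC-based "signature" are all stand-ins of mine; the real attester uses a TPM-protected signing key and a tunable threshold.

```python
import hmac, hashlib, time

ATTESTER_KEY = b"tpm-protected-key"  # stand-in for a TPM-rooted signing key
WINDOW = 1.0                         # seconds; the real threshold is tunable

class Attester:
    """Toy model of NAB's timing heuristic: grant an attestation only if a
    keyboard or mouse event was observed recently."""

    def __init__(self):
        self.last_input = float("-inf")

    def on_input_event(self):
        # Called on every keyboard/mouse event seen by the trusted input path.
        self.last_input = time.monotonic()

    def attest(self, payload: bytes, responder: bytes):
        # Refuse if no human input was seen within the window.
        if time.monotonic() - self.last_input > WINDOW:
            return None
        # Bind the attestation to the payload and the responder so it is
        # non-transferable (HMAC stands in for a TPM-backed signature).
        return hmac.new(ATTESTER_KEY, payload + b"|" + responder,
                        hashlib.sha256).digest()
```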
The authors next describe their evaluation of NAB. They evaluate the attester with respect to TCB size, CPU requirements, and required application changes, and the verifier with respect to how well it mitigates attacks and the rate at which it can verify attestations. They find that the attester accounts for about 500 SLOC of the roughly 30,000-SLOC TCB, that the worst-case latency for generating an attestation is 10 ms on a 2 GHz Core 2 processor, and that modifying two applications to request attestations required less than 250 SLOC each. For the verifier, they find that the amount of spam can be reduced by 92% with no false positives, that it can reduce the peak processing load seen at mail servers, that it can filter out 89% of bot-generated DDoS activity without filtering out human-generated requests, and that it can identify click-fraud activity with more than 87% accuracy without filtering out human-generated clicks.
Critique
I didn’t really think that much of this paper. One criticism is that although adding NAB to applications wouldn’t be technically difficult, as the authors explain, you would still have to get a lot of applications (e.g., email clients, web browsers, servers) to include it, and then get a lot of hosts to run those versions, because NAB seems far less useful if only some clients use it. In their evaluation, all of the client programs ran NAB; were that not the case, it would be less effective. Another criticism, or perhaps point of confusion, concerns deferred attestations, which are meant for script-generated email: I don’t see why attackers couldn’t leverage these to generate attestations for their own spam or whatever else they wanted. A further criticism is that with the simple timing heuristic (treating activity as human-generated if it occurs within a certain amount of time of keyboard or mouse activity), bots can still generate attestations for their own traffic by, as the authors say, harvesting human activity, which is probably still sufficient for generating large amounts of spam. The authors would probably be better off using specific mouse or keyboard activity, rather than just its timing, to decide whether to grant an attestation request, but in the paper they claim that strategy is too complex to implement.
Labels:
Balakrishnan,
botnets,
Gummadi,
Maniatis,
network security,
Ratnasamy
Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
S. Nedevschi, J. Chandrashekar, J. Liu, B. Nordman, S. Ratnasamy, N. Taft, "Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems," NSDI'09, (April 2009).
One line summary: This paper examines the value of using proxies to handle idle-time traffic on behalf of sleeping hosts, with the goal of reducing wasted energy consumption in networked end-systems; it does so by analyzing and classifying traffic to see what can be ignored or handled automatically, and by examining several potential proxy designs.
Summary
This paper examines the problem of reducing wasted energy consumption in powered-on but idle networked end-systems, such as desktops in home and office environments. It discusses various solutions, and in particular examines the value of using a proxy to handle idle-time traffic on behalf of sleeping hosts. The main problem is that while vendors have built in hardware support for sleep (S-states) to reduce power consumption while idle, surveys of office buildings indicate that the vast majority of machines are left fully on while idle instead of taking advantage of these sleep states. One reason is that a sleeping machine loses its network presence, i.e., it can neither send nor receive network messages; another is that users or administrators occasionally want to schedule tasks to run during these idle times. The paper thus tries to answer the following questions: (1) Is the problem worth solving? (2) What network traffic do idle machines see? (3) What is the design space for a proxy? (4) What implications does proxying have for future protocol and system design?
To begin to answer these questions, the authors collect network and user-level activity traces from 250 client machines belonging to Intel employees and classify the traffic. They first classify each packet as broadcast, multicast, or unicast, and as incoming or outgoing. They find that outgoing traffic tends to be dominated by unicast, while incoming traffic is made up of significant proportions of all three. They estimate the potential for sleep in four scenarios: (a) ignore broadcast and wake for the rest, (b) ignore multicast and wake for the rest, (c) ignore both broadcast and multicast, and (d) wake for all packets. They find that broadcast and multicast traffic is mainly responsible for reducing the amount of potential sleep time, and that doing away with just one of the two is not effective. The authors next classify the traffic by protocol and evaluate each protocol on two metrics: total volume of traffic, and what the authors call half-sleep time. A high half-sleep time means that a protocol’s packets could be handled by waking the machine up, whereas a low half-sleep time means that, to achieve useful amounts of sleep, the proxy would have to handle them without waking the host. They find that the bulk of broadcast traffic is for address resolution and service discovery, and that much of the remaining broadcast traffic comes from router-specific protocols. Broadcast traffic allows for very little sleep in the office, but significantly more in the home. A proxy could easily handle most of these broadcast protocols. Multicast traffic is mostly caused by router protocols and is often absent or greatly reduced in homes as compared to offices. All router traffic is ignorable. From their analysis of unicast traffic, they speculate that it might be possible to ignore or eliminate much of it. Finally, they classify traffic along two axes: the need to proxy it (don’t wake, don’t ignore, or policy-dependent) and the difficulty of proxying it (ignorable/droppable, handled via mechanical responses, or requiring specialized processing).
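As a rough illustration of the scenario analysis, here is a sketch of how one might estimate potential sleep time from a packet trace under the four wake scenarios. The trace format and idle-gap threshold are my own simplifications, not the paper's methodology.

```python
IDLE_GAP = 10.0  # seconds; gaps shorter than this are assumed too brief to sleep

def potential_sleep(trace, ignore):
    """trace: list of (timestamp, kind) with kind in
    {'broadcast', 'multicast', 'unicast'};
    ignore: set of kinds that do NOT wake the host."""
    wake_times = sorted(t for t, kind in trace if kind not in ignore)
    sleep = 0.0
    for prev, nxt in zip(wake_times, wake_times[1:]):
        gap = nxt - prev
        if gap > IDLE_GAP:
            sleep += gap  # long gaps between wake-worthy packets are sleepable
    return sleep

scenarios = {
    "(a) ignore broadcast": {"broadcast"},
    "(b) ignore multicast": {"multicast"},
    "(c) ignore both":      {"broadcast", "multicast"},
    "(d) wake for all":     set(),
}
trace = [(0.0, "unicast"), (30.0, "broadcast"), (60.0, "multicast"), (120.0, "unicast")]
for name, ignore in scenarios.items():
    print(name, potential_sleep(trace, ignore))
```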
Next, they present four proxy designs. The first ignores all traffic classified as ignorable and wakes the host for the rest. The second ignores all ignorable traffic, responds to traffic that can be handled by mechanical responses, and wakes the machine for the rest. The third does the same as the second, except that it wakes the host only for traffic belonging to a certain set and drops any other incoming traffic. The fourth does the same as the third, except that it also wakes the host for a certain set of scheduled tasks. They find that the simplest design, proxy 1, is inadequate for office environments and nearly inadequate for home environments, but that proxy 3 achieves a good amount of sleep in all scenarios (more than 70% of the idle time). They also find that the effectiveness of proxy 2 depends a great deal on the environment, so the best trade-off between design complexity and power savings is environment-dependent. Since scheduled wake-ups are infrequent, their impact on sleep is minimal, and proxy 4 performs practically the same as proxy 3. Finally, the authors offer a basic proxy architecture that could serve as a framework for building the different proxy designs they considered, and to demonstrate feasibility they implemented a simple proxy prototype in Click. They end by speculating about how systems could be redesigned to be more power-aware, thereby simplifying proxy implementation, making proxies more effective, or eliminating the need for proxies altogether.
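To summarize the four designs, here is a hedged sketch of their decision logic. The category names follow the paper's classification, but the packet representation, the classify() hook, and the wake list are my own illustration (the paper's prototype is built in Click, not Python).

```python
def handle(packet: dict, level: int, wake_list: set, classify) -> str:
    """level is 1-4, matching the paper's four proxy designs;
    classify maps a packet to 'ignorable', 'mechanical', or 'other'."""
    cls = classify(packet)
    if cls == "ignorable":
        return "drop"                 # every design ignores ignorable traffic
    if level >= 2 and cls == "mechanical":
        return "respond"              # canned reply; the host keeps sleeping
    if level >= 3:
        # Proxies 3 and 4 wake only for a designated set of protocols and
        # drop everything else (proxy 4 also wakes on scheduled-task timers).
        return "wake" if packet.get("proto") in wake_list else "drop"
    return "wake"                     # proxies 1 and 2 wake for everything else
```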
Critique
One thing I really liked about this paper is how the authors analyzed and classified network traffic before considering proxy designs; in retrospect, that analysis seems absolutely necessary for guiding the design of a proxy. It was also informative in general to look at the traffic traces from the perspective of which packets are ignorable, which can be handled automatically, and which are actually “important”. I also liked how they examined several points in the proxy design space and compared them. Overall, I thought this was a thoughtful and well-organized paper, and I think it should stay in the syllabus.
Labels:
Chandrashekar,
enterprise networks,
Liu,
Nedevschi,
Nordman,
power consumption,
Ratnasamy,
Taft