CS 736 Final Project Poster Session


Monday, December 20, 2004
10:30 am - Noon
2310 Computer Sciences

1. Fuzz Again: An Empirical Study of the Robustness of UNIX Utilities, X-Windows and Mac OS X Applications Patrick Davidson, Matthew Farrellee
  Abstract: Fuzz testing is the process of sending randomly generated input data to programs in order to test their reliability. Previous studies tested command-line Unix programs and GUI applications for X-Windows and Windows NT. We extend the previous work by testing Mac OS X GUI applications. We also update previous results by running command-line tests on recent versions of Linux and Mac OS X, and X-Windows tests on recent versions of Linux. In addition to a standard Red Hat Linux 9 installation, we also test the homegrown Linux-based environment of the University of Wisconsin-Madison Computer Systems Lab (CSL).

Red Hat Linux 9 had the fewest crashes of the systems we tested, and fewer than any previously tested system, with 1 out of 60 programs failing (1.7%). Mac OS X had a 10% crash rate in command-line tests. The CSL machines' failure rate was 8.8%, with 6 out of 68 programs crashing; at least 4 of these crashes were the result of obsolete software or software versions. No X-Windows GUI applications crashed under Red Hat, but 1 out of 16 hung. One out of 18 GUI applications crashed on Mac OS X.
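
The core of such a fuzz test is small. The sketch below is an illustration, not the authors' harness: it pipes a stream of random bytes into a command-line utility and reports whether the utility dies by a signal. The default target path and byte budget are arbitrary; detecting hangs, which the study also measures, would additionally require a timeout (e.g., alarm()).

    /* fuzz.c: feed random bytes to a utility's stdin; report crashes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        const char *target = (argc > 1) ? argv[1] : "/usr/bin/sort"; /* arbitrary default */
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                      /* child: run the utility */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]); close(fd[1]);
            execl(target, target, (char *)NULL);
            _exit(127);
        }
        close(fd[0]);

        signal(SIGPIPE, SIG_IGN);            /* survive an early target crash */
        srandom(getpid());
        for (int i = 0; i < 100000; i++) {   /* 100 KB of random input */
            char c = (char)(random() & 0xff);
            if (write(fd[1], &c, 1) != 1) break;   /* target closed stdin */
        }
        close(fd[1]);

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("%s crashed: signal %d\n", target, WTERMSIG(status));
        else
            printf("%s exited: status %d\n", target, WEXITSTATUS(status));
        return 0;
    }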

2. A Study of the Memory Utilization Patterns of Certain Important Applications Mariyam Mirza
  Abstract: We study the memory use patterns of a simple microbenchmark, a kernel build, a database server, and a web server. Our goal is to find the extent to which available physical memory size is a performance bottleneck for an application. We compare the real and virtual running times of the applications. We find that the amount of time the kernel build and the database server spend in the blocked state can only partly be attributed to page faults. We conclude that while page faults due to memory size limitations account for some of the user-observed latency of these applications, some other blocking activity contributes significantly to the latency as well.

We also investigate whether an application's memory requirement is constant throughout its execution, or whether it is phased in some way, so that optimizations can be targeted to the correct execution phase. We find that, with the exception of the kernel build, all the applications studied have well-defined, distinguishable phases of high and low memory use. This information is useful for anyone attempting to speed up these applications.
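
A minimal way to reproduce the real-versus-virtual-time comparison with stock POSIX interfaces is sketched below; the study's own instrumentation is likely finer-grained. The harness runs a workload, compares wall-clock time against CPU time from getrusage() (attributing the gap to blocking plus CPU contention), and reports the page-fault counts.

    /* memtime.c: run a command; compare real vs. virtual (CPU) time and
     * report its page-fault counts via getrusage(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s cmd [args]\n", argv[0]); return 1; }

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);

        pid_t pid = fork();
        if (pid == 0) { execvp(argv[1], &argv[1]); _exit(127); }

        int status;
        waitpid(pid, &status, 0);
        gettimeofday(&t1, NULL);

        struct rusage ru;
        getrusage(RUSAGE_CHILDREN, &ru);   /* usage of the reaped child */

        double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        double virt = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
                    + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

        printf("real %.2fs  cpu %.2fs  blocked/waiting %.2fs\n", real, virt, real - virt);
        printf("major (I/O) faults %ld  minor faults %ld\n", ru.ru_majflt, ru.ru_minflt);
        return 0;
    }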

3. Bypassing License Checking using Dynamic Instrumentation Lakshmikant Shrinivas and Latif Alianto
  Abstract: With the increasing number of applications that can be downloaded and tried before purchase, it is important that such software be secured against attempts to bypass its license checks. Based on the premise that existing commercial protection schemes are flawed, we bypassed the protection of a typical shareware application with relatively little effort. The key technology for these attacks is the ability to easily analyze and manipulate a running program. We used DynInst [3] to dynamically instrument, analyze, and control the application software. The procedure involved: (1) profiling a running process to obtain structural information; (2) controlling the execution of the program by dynamically loading new libraries; (3) inserting new code sequences into the running program; and (4) replacing individual call instructions or entire functions. This paper presents the procedure used to bypass a 3D modeling tool called AC3D [7]. Along with discussing the vulnerabilities, we also discuss strategies to compensate for them and strengthen such protection schemes.
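
DynInst hides the machine-level details behind a portable API. As a rough picture of what step (4) means underneath, the sketch below uses raw ptrace on 32-bit x86 Linux (an analogy, not the project's DynInst code) to overwrite the entry of a hypothetical check_license() function, whose address is assumed to have been recovered with nm or objdump, so that it always reports success.

    /* patch.c: make a hypothetical check_license() in a running process
     * return 1 unconditionally, by writing "mov eax,1; ret" over its entry.
     * CHECK_ADDR is an assumed address; 32-bit x86 Linux (4-byte long). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define CHECK_ADDR 0x08048abcUL        /* hypothetical, from nm/objdump */

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }
        pid_t pid = (pid_t)atoi(argv[1]);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0) { perror("attach"); return 1; }
        waitpid(pid, NULL, 0);             /* wait for the tracee to stop */

        /* Bytes b8 01 00 00 00 c3 = "mov eax,1; ret", written as two
         * little-endian words; the two pad bytes after ret never execute. */
        ptrace(PTRACE_POKETEXT, pid, (void *)CHECK_ADDR,       (void *)0x000001b8L);
        ptrace(PTRACE_POKETEXT, pid, (void *)(CHECK_ADDR + 4), (void *)0x0000c300L);

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }
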
4. Security Avoidance in a Windows Application using Dynamic Instrumentation Sriya Santhanam and Janani Thanigachalam
  Abstract: Numerous commodity Windows-based applications are available as free evaluation copies for limited periods of time. These applications are prime candidates for security attacks, and we use the dynamic instrumentation capabilities provided by the DynInst API to demonstrate one such attack. While our attack strategy is generic to time-limited trial applications that use the Microsoft C runtime library, our chosen target application is the 30-day trial version of SecureCRT for Windows. The attack methodology uses program inspection and runtime code modification, and is applicable to both stripped and unstripped Windows binaries.
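
The weak point such attacks exploit is that expiry checks ultimately ask the C runtime for the current date. The sketch below shows the analogous trick on Unix (an illustration; the project patches the Windows binary at run time with DynInst): an LD_PRELOAD library interposes on libc's time() so the program always sees an instant inside its trial window.

    /* faketime.c: build with  gcc -shared -fPIC -o faketime.so faketime.c
     * and run the target as   LD_PRELOAD=./faketime.so ./trial_app
     * Every time() call then reports a frozen instant (value arbitrary). */
    #include <time.h>

    #define FROZEN_TIME ((time_t)1100000000)   /* ~Nov 2004, illustrative */

    time_t time(time_t *t)
    {
        /* Never consult the real clock. A subtler variant would fetch the
         * real function with dlsym(RTLD_NEXT, "time") and shift its result. */
        if (t) *t = FROZEN_TIME;
        return FROZEN_TIME;
    }
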
5. Implementation and Evaluation of WSClock and Load Control in Linux Vikas Garg and Mike Ottum
  Abstract: Linux currently uses a global LRU-based replacement policy. Our goal was to evaluate this page replacement policy under heavy memory load and measure its performance against a new implementation of WSClock with Load Control. We implemented the WSClock and Load Control algorithms independently of each other, so that we could measure the impact of Load Control on the global LRU policy and the behavior of WSClock with the normal scheduler, in addition to the impact of WSClock and Load Control together. We show that under heavy load, Linux gives preference to batch tasks over interactive tasks, resulting in priority inversion. WSClock maintains the correct priorities, providing nearly constant response times for interactive tasks as system load increases.
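
For reference, the heart of WSClock (after Carr and Hennessy) is a single clock sweep that respects per-process working sets. The sketch below is a simplified user-level rendering with illustrative field names and window size, not the project's kernel code.

    /* wsclock.c: simplified WSClock victim selection. */
    #include <stdbool.h>
    #include <stddef.h>

    #define NFRAMES 1024
    #define TAU     50UL      /* working-set window, in virtual-time ticks */

    struct frame {
        bool referenced;          /* copy of the hardware reference bit */
        bool dirty;               /* copy of the hardware modified bit */
        unsigned long last_use;   /* owner's virtual time at last reference */
        unsigned long owner_vt;   /* owner's current virtual time */
    };

    static struct frame frames[NFRAMES];
    static size_t hand;

    static void schedule_writeback(size_t i) { (void)i; /* queue page i for I/O */ }

    /* Sweep the clock hand until a victim is found. A real implementation
     * bounds the sweep and falls back (e.g., waits for a writeback) if a
     * full revolution finds no evictable frame. */
    size_t wsclock_pick_victim(void)
    {
        for (;;) {
            struct frame *f = &frames[hand];
            size_t cur = hand;
            hand = (hand + 1) % NFRAMES;

            if (f->referenced) {
                f->referenced = false;       /* recently used: keep it */
                f->last_use = f->owner_vt;   /* refresh its timestamp */
            } else if (f->owner_vt - f->last_use > TAU) {
                if (!f->dirty)
                    return cur;              /* outside working set, clean: evict */
                schedule_writeback(cur);     /* dirty: queue I/O, keep scanning */
            }
        }
    }
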
6. Implementation of WSClock and Load Control in the Linux 2.6.5 Kernel Nidhi Aggarwal and Kyle Rupnow
  Abstract: We replaced the existing Linux 2.6 kernel page replacement policy with the WSClock page replacement policy. WSClock is more effective than the existing policy because its combination of the working-set and clock algorithms lets it benefit from the locality of reference of the working set while remaining simple to implement. We tested our implementation on benchmarks representing a range of workloads, including I/O-intensive, compute-intensive, and interactive workloads. A kernel that includes Load Control incurs up to 5% fewer page faults on individual benchmarks, with benchmarks that take many page faults improving the most. A kernel that includes WSClock incurs up to 7% fewer page faults than the baseline. A kernel combining WSClock and Load Control outperforms the baseline kernel on every individual benchmark, and when multiple benchmarks run simultaneously it incurs up to 25% fewer page faults than the baseline kernel.
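
The load-control half of both projects amounts to a simple admission policy: when the summed working sets exceed physical memory, suspend a process instead of letting every process thrash. A schematic version appears below; names and thresholds are illustrative, not from the 2.6.5 patch.

    /* loadctl.c: schematic load control on top of a working-set estimator. */
    #include <stdbool.h>
    #include <stddef.h>

    #define PHYS_FRAMES 65536UL
    #define NPROC 64

    struct proc {
        unsigned long ws_size;   /* estimated working set, in frames */
        int priority;            /* larger value = less important */
        bool suspended;
    };

    static struct proc procs[NPROC];

    void load_control_tick(void)
    {
        unsigned long demand = 0;
        for (size_t i = 0; i < NPROC; i++)
            if (!procs[i].suspended)
                demand += procs[i].ws_size;

        if (demand > PHYS_FRAMES) {
            /* Thrashing: suspend (and page out) the least important process. */
            struct proc *victim = NULL;
            for (size_t i = 0; i < NPROC; i++)
                if (!procs[i].suspended &&
                    (!victim || procs[i].priority > victim->priority))
                    victim = &procs[i];
            if (victim)
                victim->suspended = true;
        } else {
            /* Resume a suspended process only if its working set now fits;
             * real policies add hysteresis to avoid oscillation. */
            for (size_t i = 0; i < NPROC; i++)
                if (procs[i].suspended &&
                    demand + procs[i].ws_size <= PHYS_FRAMES) {
                    procs[i].suspended = false;
                    break;
                }
        }
    }
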

Last modified: Tue Dec 14 16:42:07 CST 2004 by bart