CS 736 Final Project Poster Session


Wednesday, December 21, 2016
10:00 am - Noon
1240 Computer Sciences

Everyone is welcome. Come see the results of these interesting projects. Refreshments will be served.

1. Performance Comparison of NTFS and ext4 Akshay Uttamani, Vaibhav Patel
Abstract: With the growing popularity of disk-I/O-intensive applications, choosing an operating system with an efficient file system has become important. It is necessary to assess the performance of different file systems under their associated operating systems to help select an operating system for different application requirements [1]. In this paper, we take two of the most pervasive file systems, NTFS under Windows and ext4 under Linux, and generate workloads to compare them. We conduct micro-benchmark experiments on sequential reads, sequential writes, random reads, random writes, prefetch size, and metadata operations. We generate macro-benchmark workloads using apache-bench and explain their results using our micro-benchmarks. Our results show that file system performance is largely tied to how data is laid out on disk and to performance strategies such as memory caching and prefetching. ext4 performs better for sequential and random reads/writes as well as metadata operations for the file sizes we experimented with.
2. Fuzz Testing on Android Guanqing Yan, Tianshuo Su
Abstract: The original fuzz testing started as a course project in 1988 at the University of Wisconsin - Madison. Based on the simple idea that randomly generated input can be used to automate software testing, the original fuzz testing project yielded the somewhat surprising result that 25-33% of the utility programs tested crashed. Since then, fuzz testing has been extended to X Window GUI applications, Windows NT, and MacOS. In the age of mobile technology, we extended fuzz testing to Android and tested the reliability of many popular applications.

We chose 18 popular applications from the Google Play Store. During testing, 6 applications (33%) crashed and 4 applications (22%) hung. Since we do not have source code available, we can only speculate about possible causes of the crashes or hangs based on the stack traces alone.

We found that fuzz testing on mobile platforms presents challenges different from those of previous systems. For example, we did not test social applications, since doing so could disturb real users. At the current stage, fuzz testing cannot entirely replace manual testing, but it still provides important reliability checks.

3. Fuzzing System Call Return Values Sripradha Karkala, Kavin Mani
Abstract: Bad programming practices can often lead to program failures, unexpected behavior, loss or corruption of application data, and deadlocks. One such common oversight during program development is failure to check the return values of function calls. In this paper, we challenged the robustness of programs by testing the issue of unchecked return values. We ran checks on several UNIX utilities and applications by failing system calls in different domains, including memory, file, process management, and network operations. To conduct this experiment, we wrote a wrapper shared library that probabilistically returned the standard error values on execution. This means that when a system call is invoked, instead of the call being executed by the system as expected, we sometimes deliberately fail the call and return an error value. We adopted library interposition techniques to make the target applications use the overridden wrapper functions rather than the standard system calls. The result of our experiment was that 25% of the programs tested (30 out of 120) crashed. We managed to crash several popular UNIX utilities such as ps, netstat, and ifconfig, and some GUI-based applications such as bitmap, firefox, and evince (a PDF viewer). The most commonly observed failure modes were segmentation faults, deadlocks, failures in shared libraries, incorrect loading of the user interface (UI), and trace/breakpoint traps. We analyzed them to identify the exact cause and location of each failure. Evaluation of the crashes unearthed bugs in popular programs like gdb and vim and in shared libraries like glibc.
4. Return Value Testing of Linux Applications Keith Funkhouser, Malcolm Reid, Colin Samplawski
Abstract: We used fuzz testing methods to investigate the robustness of various Linux applications. We used the LD_PRELOAD environment variable to perform library interposition for interception of system and library calls. Erroneous return values were injected into the calling applications probabilistically. In a suite of 88 small-scale utilities and large-scale programs, crashes (unintentional core dumps or hangs) were observed for at least one of the 19 intercepted calls in 31 of the 88 applications tested (35.2%). We found a greater incidence of crashes in large-scale applications as compared to small-scale utilities. Memory allocation library calls accounted for the majority of crashes, and small-scale applications crashed exclusively by core dump. Failure to check return values continues to lead to unexpected program behavior, even in some of the most popular open source projects (e.g. Firefox, VLC, and gcc).
5. Benchmarking File Systems: EXT4, NTFS, ZFS Aribhit Mishra, Mickey Barboi, Mingi Kim
Abstract: Modern operating systems have a wide range of file systems to choose from. Each of these is finely tuned and presumably well maintained, but they offer different features and performance. In this paper we compare three popular file systems: EXT4, ZFS, and NTFS. First, we compare their advertised features and structural differences by analyzing their design and published literature. We then construct macrobenchmarks with realistic workloads using popular consumer applications - the Apache web server, a Go web server, and the PostgreSQL relational database - to test the file systems at a high level. Finally, we develop microbenchmarks to use as evidence to help explain differences found in the macrobenchmark results. Within the scope of our chosen applications, we find that EXT4 generally leads in performance while NTFS lags behind.
Return to CS736 home page.

Last modified: Thu Dec 22 14:52:24 CST 2016 by bart