Untangling ZFS Policies
Swaminathan Sundararaman and Sriram Subramanian
Abstract: The ZFS file system from Sun is the latest buzzword
in the file system community. The creators of ZFS claim to have
re-designed the file system from scratch, providing new features and
levels of reliability, performance, and efficiency uncommon in
traditional file systems. This includes dynamic block allocation that
changes block sizes based on the workload. In this paper we
focus primarily on the block allocation policy of ZFS under
varied workloads. We built our infrastructure on semantic
block analysis and found that ZFS allocates blocks based on the file
offsets being written, not on the workload. The block
allocation policy performs poorly for random writes.
We also found that ZFS merges smaller blocks into one larger block; as a
result, a single small block write is converted into a
read-modify-write of the larger block.
The ZFS Intent Log also has a poor block allocation policy: for small
block writes it wastes a significant amount of storage space in the
file system. We also have preliminary results on
ZFS dynamically modifying its block writing policy based on the
workload, and found a few cases where it smartly adapted its block
writing policy to provide constant throughput.
We also have preliminary results on the effect of memory pressure on
the versioning mechanism in ZFS. We find that ZFS promises more than it
actually delivers.
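The read-modify-write effect described above can be illustrated with a small back-of-the-envelope sketch. The following Python snippet is not ZFS code; the block sizes and function names are hypothetical, chosen only to show how a small write to a region stored as one merged large block forces the entire block to be read and rewritten:

```python
# Hypothetical model of write amplification when a file system merges
# small blocks into one large block (illustrative only, not ZFS code).

SMALL_BLOCK = 4 * 1024      # assumed small block size: 4 KiB
LARGE_BLOCK = 128 * 1024    # assumed merged block size: 128 KiB

def io_bytes_separate(write_size):
    # Blocks kept separate: a full small-block overwrite needs no read,
    # so only the written bytes hit the disk.
    return write_size

def io_bytes_merged(write_size):
    # Blocks merged into one large block: the whole large block must be
    # read, modified in memory, and written back (read-modify-write).
    return LARGE_BLOCK + LARGE_BLOCK  # one full read plus one full write

if __name__ == "__main__":
    write = SMALL_BLOCK
    print("separate blocks:", io_bytes_separate(write), "bytes of I/O")
    print("merged block:   ", io_bytes_merged(write), "bytes of I/O")
```

Under these assumed sizes, a 4 KiB write costs 4 KiB of I/O when blocks are kept separate, but 256 KiB when it lands inside a 128 KiB merged block, a 64x amplification.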
Available as: PDF
Presentation available as: PPT
Code: code.tar.gz