This distribution is version 2 of the Self-Scaling Benchmark, dated 2/10/95.

There are no substantial changes from the original release, just a couple
    of bug fixes and changes to make it more portable.
-------------------------------------------------------------------------------
The following explanation assumes these files reside in a directory called
<topLevel>.

This directory contains the sources for a self-scaling I/O benchmark.  The
main program, adaptWl, is the parent process which controls the
self-scaling benchmark, using specWl and doWl as child processes.  For
each workload point, adaptWl calls specWl with the appropriate arguments
to create a cmd script, which doWl then reads and carries out.

You should only have to run adaptWl.  This is generally done by a csh script,
such as the supplied script "go".
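As an illustration only (the test directory and redirection below are
placeholders, and the real "go" script supplied with the benchmark may pass
more flags), a minimal csh driver might look like:

```shell
#!/bin/csh -f
# Hypothetical skeleton of a "go"-style driver script.
# TESTDIR is a placeholder and must live on a different
# file system than <topLevel> (see step 1 below).
set TESTDIR = /testfs/bench
adaptWl -d $TESTDIR >& out/explore
```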

For any of these programs, you can get help on how to run them by
typing program-name -help.  E.g. "adaptWl -help" gives you help on running
adaptWl.

To run, here's what to do:
    1) Decide which directory you are testing.  One of the arguments to
	adaptWl is "-d <testDir>".  This directory should be on
	a different file system than <topLevel>.
    2) Tailor the file flushClientServer.  This script flushes the
	system cache by un-mounting and re-mounting the file system which
	contains the test directory.  The only things you have to change
	are the variable definitions for TEMPFILESYSTEM and DEVICE.  This
	script also needs to be given root setuid privileges or be run as
	root.
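	As a hedged sketch (not the actual supplied script), flushClientServer
	amounts to something like the following; both variable values are
	placeholders you must replace for your system:

```shell
#!/bin/sh
# Sketch of what flushClientServer does: flush the system cache by
# un-mounting and re-mounting the file system that contains the test
# directory.  Both values below are placeholders.
TEMPFILESYSTEM=/testfs      # mount point of the file system with <testDir>
DEVICE=/dev/sd0a            # device that backs that file system
umount $TEMPFILESYSTEM
mount $DEVICE $TEMPFILESYSTEM
```

	Remember that this must run as root (or be setuid root), since
	ordinary users cannot un-mount file systems.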

	If this is impossible, life is more complicated, but you can try the
	following:
	    a) add the flag "-nf" to the adaptWl call in the go script
	    b) manually specify the maximum value of uniqueBytes (without
		being able to flush the file cache, adaptWl won't be able to
		automatically figure out what the max value of uniqueBytes
		should be.)  This is done via the "-u" option to adaptWl.
		Change it from "-u 1000-1000-1500000", which sets the
		maximum value to 1.5 GB, to "-u 1000-1000-X", where X is
		twice the maximum file cache size (in KB).
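	For example, assuming (hypothetically) a machine with a 64 MB file
	cache, X is 2 * 64 * 1024 = 131072 KB:

```shell
# Hypothetical example: a machine with a 64 MB file cache.
CACHE_KB=65536                  # 64 MB, expressed in KB
X=$((2 * CACHE_KB))             # twice the maximum file cache size
echo "-u 1000-1000-$X"          # the value to pass to adaptWl
```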
    3) Make the binaries.  This can be done by typing "make" in the
	<topLevel> directory.
    4) Tailor the go file.  The one thing you have to tailor is
	the TESTDIR definition (this should be the testing directory on the
	file system you're testing).
    5) Run go.  Output will go to out/explore.  "go" tries to do several
	things automatically.
	a) it tries to narrow the range of the workload parameters--
	    sizeMean, processNum, and uniqueBytes.
	b) it tries to find the plateaus in the uniqueBytes graph.  A
	    plateau is basically a region of roughly constant performance.
	    For example,
	    most machines have two plateaus--file cache performance
	    (from small uniqueBytes to the max file cache size) and
	    disk performance (from the max file cache size on up).
	    adaptWl goofs sometimes when it tries to automatically find these
	    plateaus.  If, after looking at the uniqueBytes graph,
	    you think the plateaus are wrong, you can manually specify
	    the midpoints of the plateaus with the "-fu" option ("fu" stands for
	    focal point of uniqueBytes).  For instance, if you think the
	    plateau midpoints are 10 MB and 100 MB, then specify
	    "-fu 10000,100000" to adaptWl.
	    You can find the uniqueBytes graph in out/explore
	    by searching for "gti uniqueBytes".
    6) To get the final output, run gnuplotWl.  This can be done by
	"gnuplotWl out/explore".  This will generate N families of graphs,
	where each family consists of 1 graph per workload parameter
	(uniqueBytes, readFrac, sizeMean, processNum, seqFrac).  N is the
	number of plateaus in uniqueBytes.  The output will be in
	the PostScript files out/explore*.ps.  If you lack gnuplot, you
	can look at the data files "out/explore*.gnudata", which consist
	of 2 data sets of (x y) pairs.  The first data set is the actual
	data; the second data set is a vertical line indicating the knee
	value for the parameter.
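	To illustrate that layout, here is a made-up gnudata file (all
	numbers invented) and an awk snippet that pulls the knee's x value
	out of the second data set, assuming the two sets are separated by
	a blank line:

```shell
# Made-up data in the two-data-set (x y) layout described above.
cat > /tmp/sample.gnudata <<'EOF'
1000 5.2
2000 5.1
4000 2.3

1500 0
1500 6
EOF
# The second data set is the vertical line marking the knee; print its x.
KNEE=$(awk 'found && NF == 2 { print $1; exit } /^$/ { found = 1 }' /tmp/sample.gnudata)
echo "knee at x = $KNEE"
```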

Beware--since this software is in alpha test, it prints out lots of debugging
output.  Also, little attempt has been made to protect this benchmark from
optimizations.  For instance, the data used to do the I/O is completely
synthetic (and compresses exceedingly well).  Much more work will need to be
done if this ever gets used as a standard I/O benchmark.  It's intended
primarily to help you understand the performance of your computer and operating
system.

Feel free to e-mail me when you start to run this.  It will require some
interaction to get this going and to understand the results.

Peter M. Chen
(313) 763-4472
pmchen@eecs.umich.EDU

--------------------- some miscellaneous run notes ------------------------

1) Results from running adaptWl may be unstable (i.e. the graphs are very
jagged).  If this is the case, try increasing the minimum run time (default
is 60 seconds).  That way, each workload will run longer and (hopefully)
results will be more stable.  This can be done by adding the option
"-mi 120" to the "go" script, which sets the minimum running time to 120
seconds.

2) This benchmark requires a variable amount of disk space, depending on
how large your file cache is.  In general, it takes 2-3 times as much
space as the maximum file cache size.
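
For instance, assuming (hypothetically) a 64 MB file cache:

```shell
# Hypothetical example: a machine with a 64 MB file cache.
CACHE_MB=64
LOW=$((2 * CACHE_MB))           # lower bound on space needed
HIGH=$((3 * CACHE_MB))          # upper bound
echo "expect to need roughly $LOW-$HIGH MB in the test directory"
```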

3) If you are trying to exercise multiple file systems, create a test
    directory somewhere (doesn't really matter where) and put symbolic
    links in that directory to directories on the file systems you're
    testing.  The names of the links should look like child.0.data,
    child.1.data, child.2.data, etc. (up to child.10.data), or more
    if you use more than 10 processes.  Make sure you specify the symbolic
    links with absolute pathnames.  Then set TESTDIR to be the directory with
    the symbolic links.
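
    A sketch of such a setup for two file systems; every path below is a
    placeholder:

```shell
# Hypothetical setup: 10 links alternating between two file systems.
# All paths are placeholders; link targets must be absolute pathnames.
TESTDIR=/tmp/benchtest
mkdir -p $TESTDIR
i=0
while [ $i -lt 10 ]; do
    ln -sf /fs$((i % 2))/bench $TESTDIR/child.$i.data
    i=$((i + 1))
done
```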
