next up previous
Next: Configuration File Up: Wisconsin Proxy Benchmark Previous: Client Processes

Server Processes

The server process listens on a particular port on the server machine. When the server receives an HTTP request, it parses the URL to extract the filenum. It then chooses a random number (drawn from a particular distribution function) as the size of the HTML document, and forks a child process. The child process sleeps for a specified number of seconds, constructs an array of the chosen size, fills the array with the string "aaa[filenum]", replies to the HTTP request with the array appended to a pre-filled response header, and then exits. The response header states that the "document" was last modified at a fixed date and expires in three days. The server process also ensures that if it services a request it has serviced before, the file size and array contents stay the same.
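The per-request logic above can be sketched as follows. This is a minimal Python illustration, not the benchmark's actual (C) implementation; the function names, the cache, and the fixed Last-Modified date are our own assumptions. It shows the two properties the text requires: the body is the repeated string "aaa[filenum]", and repeated requests for the same filenum get the same size and contents.

```python
import random

# Hypothetical sketch of the WPB server's per-request logic.
# Sizes are cached so repeat requests for a filenum are identical.
_size_cache = {}
_rng = random.Random(0)  # deterministic for illustration

def document_size(filenum):
    """Return the body size for a filenum, fixed on first use
    (here drawn uniformly from [0, 40KB) as a stand-in)."""
    if filenum not in _size_cache:
        _size_cache[filenum] = _rng.randrange(0, 40 * 1024)
    return _size_cache[filenum]

def build_response(filenum):
    """Build an HTTP response whose body repeats 'aaa<filenum>'."""
    size = document_size(filenum)
    pattern = "aaa%d" % filenum
    body = (pattern * (size // len(pattern) + 1))[:size]
    header = ("HTTP/1.0 200 OK\r\n"
              "Last-Modified: Thu, 01 Jan 1998 00:00:00 GMT\r\n"
              "Content-Length: %d\r\n\r\n" % size)
    return header + body
```

In the real server the reply is written to the client socket by the forked child after its sleep; here we only construct the response string.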

The sleeping time in the child process models the Internet and server delay seen by the proxy. We feel it is important to model this delay because, in practice, the latency of serving HTTP requests affects the resource requirements at the proxy. Originally, we set the sleeping time to be a random value between 0 and 10 seconds, to reflect the varying latency seen by the proxy. In our preliminary testing, in order to reduce variability between experiments, we changed the latency to a constant number of seconds that can be set through a command-line argument. We are still investigating whether variable latencies expose different proxy-performance problems than constant latency does. For now, we recommend using a constant latency (see the benchmarking rules below).
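The two latency models described above amount to a one-line choice in the child process. A minimal sketch (our own naming, not the benchmark's code):

```python
import random

def service_delay(constant=None, rng=random.Random()):
    """Seconds a child sleeps before replying.

    A constant value (the currently recommended mode) removes
    run-to-run variability; None falls back to the original
    uniform 0-10 s model of varying Internet/server latency."""
    if constant is not None:
        return constant
    return rng.uniform(0, 10)
```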

Currently, the server process does not support other types of GET requests, such as conditional GET; we will fix this soon. The server process also returns a fixed last-modified date and time-to-live in every response, which will be changed as we learn more about the distribution of TTLs in practice.

The server program uses two different file size distributions. The default distribution is very primitive: a uniform distribution from 0 to 40KB for 99% of the requests, and 1MB for the remaining 1%. It is also possible to use a more realistic file size distribution, such as the heavy-tailed Pareto distribution. In this case, the two parameters of the distribution, $\alpha$ and k, must be specified in the configuration file. The parameter k is the minimum file size, and $\alpha$ determines the average file size av through $av = \frac{\alpha k}{\alpha - 1}$, or equivalently $\alpha = \frac{av}{av - k}$. Typical values of $\alpha$ and k are 1.1 and 3.0KB, as shown in [4].
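Both distributions are easy to sample. The sketch below is ours, not the benchmark's source: the Pareto sampler uses standard inverse-transform sampling of $F(x) = 1 - (k/x)^\alpha$, so every draw is at least k, and with $\alpha = 1.1$, k = 3.0KB the mean $\alpha k/(\alpha - 1)$ is 33KB.

```python
import random

def pareto_size(alpha=1.1, k=3.0 * 1024, rng=random.Random()):
    """Sample a file size (bytes) from a Pareto distribution with
    shape alpha and minimum size k, via inverse-transform sampling:
    F(x) = 1 - (k/x)**alpha  =>  x = k / u**(1/alpha), u ~ U(0,1]."""
    u = 1.0 - rng.random()        # in (0, 1], avoids division by zero
    return k / (u ** (1.0 / alpha))

def default_size(rng=random.Random()):
    """WPB's default distribution: uniform on [0, 40KB] for 99% of
    requests, 1MB for the remaining 1%."""
    if rng.random() < 0.01:
        return 1024 * 1024
    return rng.uniform(0, 40 * 1024)
```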


Pei Cao
4/13/1998