
Experiments

 
 
Table 6: Performance of ICP and Summary-Cache for the UPisa trace in Experiment 2. Numbers in parentheses show the variance of the measurements over three runs.

Exp       Hit Ratio (%)   Client Latency (s)   User CPU (s)    System CPU (s)   UDP Traffic (pkts)   TCP Traffic (pkts)   Total Packets
no ICP    16.94           6.22 (0.4%)          81.72 (0.1%)    115.63 (0.1%)    4718 (1%)            242K (0.1%)          259K (0.1%)
ICP       19.3            6.31 (0.5%)          116.81 (0.1%)   137.12 (0.1%)    72761 (0%)           245K (0.1%)          325K (0.2%)
Overhead                  1.42%                43%             19%              1400%                1%                   25%
SC-ICP    19.0            6.07 (0.1%)          91.53 (0.4%)    121.75 (0.5%)    5765 (2%)            244K (0.1%)          262K (0.1%)
Overhead                  -2.4%                12%             5%               22%                  1%                   1%


 
Table 7: Performance of ICP and Summary-Cache for the UPisa trace in Experiment 3.

Exp       Hit Ratio (%)   Client Latency (s)   User CPU (s)    System CPU (s)   UDP Traffic (pkts)   TCP Traffic (pkts)   Total Packets
no ICP    9.94            7.11                 81.75           119.7            1608                 248K                 265K
ICP       17.9            7.22                 121.5           146.4            75226                257K                 343K
Overhead                  1.6%                 49%             22%              4577%                3.7%                 29%
SC-ICP    16.2            6.80                 90.4            126.5            4144                 254K                 274K
Overhead                  -4.3%                11%             5.7%             158%                 2.4%                 3.2%

We run three experiments with the prototype. The first experiment repeats the test in Section 4; its results are included in Table 2 in Section 4 under the label ``SC-ICP.'' The improved protocol reduces UDP traffic by a factor of 50, and its network traffic, CPU times, and client latencies are similar to those of no-ICP.

Our second and third experiments replay the first 24,000 requests from the UPisa trace. We use a collection of 80 client processes running on 4 workstations, and client processes on the same workstation connect to the same proxy server. In the second experiment, we replay the trace by having each client process emulate a set of real-life clients and issue their Web requests. In the third experiment, we replay the trace by having the client processes issue requests round-robin from the trace file, regardless of which real-life client each request comes from. The second experiment preserves the binding between a client and its requests, and a client's requests all go to the same proxy; however, it does not preserve the order among requests from different clients. The third experiment does not preserve the binding between requests and clients, but it does preserve the timing order among the requests. The proxies are more load-balanced in the third experiment than in the second.
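To make the two replay strategies concrete, the following is a minimal sketch in Python. The process counts per workstation, the (client_id, url) trace format, and the send_request() helper are illustrative assumptions, not details taken from the prototype, and the real driver runs the client processes concurrently rather than in a sequential loop.

# Sketch of the two replay strategies described above (illustrative only).
from collections import defaultdict

NUM_WORKSTATIONS = 4
CLIENTS_PER_WORKSTATION = 20          # 4 x 20 = 80 client processes
NUM_CLIENT_PROCS = NUM_WORKSTATIONS * CLIENTS_PER_WORKSTATION


def send_request(proc_id, url):
    """Placeholder for issuing one HTTP request through the proxy
    serving workstation proc_id // CLIENTS_PER_WORKSTATION."""
    pass


def replay_per_client(trace):
    """Experiment 2: each client process emulates a fixed set of
    real-life clients, so a client's requests always go to the same
    proxy; ordering across different clients is not preserved."""
    by_client = defaultdict(list)
    for client_id, url in trace:
        by_client[client_id].append(url)
    for client_id, urls in by_client.items():
        proc_id = hash(client_id) % NUM_CLIENT_PROCS   # fixed client-to-process binding
        for url in urls:                               # this client's requests, in trace order
            send_request(proc_id, url)


def replay_round_robin(trace):
    """Experiment 3: requests are handed out round-robin in trace
    order, preserving the global timing order but not the binding
    between a real-life client and its requests."""
    for i, (client_id, url) in enumerate(trace):
        proc_id = i % NUM_CLIENT_PROCS                 # next process, regardless of client
        send_request(proc_id, url)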

In both experiments, each request's URL carries the document size recorded in the trace file, and the server replies with the specified number of bytes. The rest of the configuration is similar to that of the experiments in Section 4. Unlike the synthetic benchmark, the trace contains a noticeable number of remote hits. The results from Experiment 2 are listed in Table 6, and those from Experiment 3 in Table 7.
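The size-driven server behavior can be sketched as follows. The URL layout (size as the last path component) and the port number are hypothetical, chosen only to illustrate how a server can reply with exactly the number of bytes recorded in the trace.

# Minimal sketch of a trace-driven origin server (illustrative assumptions:
# the replayed URL ends in the byte count, e.g. GET /obj/1234/5678 returns
# a 5678-byte body).
from http.server import BaseHTTPRequestHandler, HTTPServer


class SizedReplyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # Take the size from the last component of the request path.
            size = int(self.path.rstrip('/').rsplit('/', 1)[-1])
        except ValueError:
            size = 0
        self.send_response(200)
        self.send_header('Content-Length', str(size))
        self.end_headers()
        self.wfile.write(b'x' * size)      # reply with exactly `size` bytes


if __name__ == '__main__':
    HTTPServer(('', 8080), SizedReplyHandler).serve_forever()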

The results show that the enhanced ICP protocol reduces the network traffic and CPU overhead significantly, while only slightly decreasing the total hit ratio. The enhanced ICP protocol lowers the client latency slightly compared to the No-ICP case, even though it increases the CPU time by about 12%. The reduction in client latency is due to the remote cache hits. Separate experiments show that most of the CPU time increase is due to servicing remote hits, and the CPU time increase due to MD5 calculation is less than 5%. Though the experiments do not replay the trace faithfully, they do illustrate the performance of summary cache in practice.
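For reference, the Overhead rows in Tables 6 and 7 are relative increases over the no-ICP run; the figures quoted above for SC-ICP in Experiment 2 follow from Table 6 as

\[
\frac{91.53 - 81.72}{81.72} \approx 12\% \;\text{(user CPU)}, \qquad
\frac{6.07 - 6.22}{6.22} \approx -2.4\% \;\text{(client latency)}.
\]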

Our results indicate that the summary-cache-enhanced ICP solves the overhead problem of ICP, requires minimal changes, and enables scalable Web cache sharing over a wide-area network.

