
Effect of Adding Disk Arms

Several studies have claimed that disks are a main bottleneck in the performance of busy proxies. We therefore wanted to analyse the impact on proxy performance of spreading the cached files over multiple disks. We expected that increasing the number of disks would reduce queueing overheads and shorten the time spent servicing each disk request, and that this would ultimately be reflected in the overall performance of the proxy.
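
As a rough illustration of this intuition (a back-of-the-envelope M/M/1 sketch, not a model of our actual disk subsystem; the arrival rate and service time below are hypothetical), splitting the same request stream over two disk arms halves the utilization of each arm and sharply reduces queueing delay:

    def mm1_response_time(arrival_rate, service_time):
        """Mean response time of an M/M/1 queue: S / (1 - rho), with rho = lambda * S."""
        rho = arrival_rate * service_time
        assert rho < 1.0, "offered load exceeds the disk's capacity"
        return service_time / (1.0 - rho)

    lam = 60.0   # disk requests per second (hypothetical)
    s = 0.015    # seconds of service per request (hypothetical)

    one_arm = mm1_response_time(lam, s)        # all requests queue at a single arm
    two_arms = mm1_response_time(lam / 2, s)   # each arm absorbs half of the stream
    print(f"one arm:  {one_arm * 1000:.0f} ms per request")    # about 150 ms
    print(f"two arms: {two_arms * 1000:.0f} ms per request")   # about 27 ms

The measured per-request service times (svc_t) reported in Tables 3 and 4 show a qualitatively similar drop.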


 
Table 1: The effect of multiple disks on Squid performance.

  Metric                      One disk       Two disks      No caching
  First-phase latency (s)     9.00 (2%)      9.88 (1%)      3.65 (0.6%)
  Second-phase latency (s)    5.44 (1.1%)    5.89 (5%)      3.42 (2%)
  First-phase errors          --             --             --
  Second-phase errors         --             --             --
  Hit ratio (%)               20.29 (1%)     22.50 (1%)     --
  Byte hit ratio (%)          5.67 (22%)     11.52 (9%)     --


 
Table 2: The effect of multiple disks on Proxy N performance.

  Metric                      One disk       Two disks       No caching
  First-phase latency (s)     9.82 (5%)      8.78 (2%)       6.71 (5%)
  Second-phase latency (s)    5.42 (4%)      4.91 (5%)       5.34 (6%)
  First-phase errors          86.67 (4%)     68.67 (7%)      77.67 (17%)
  Second-phase errors         12.67 (32%)    14.33 (20%)     16.33 (52%)
  Hit ratio (%)               21.61 (2%)     22.46 (0.7%)    --
  Byte hit ratio (%)          12.18 (56%)    21.70 (31%)     --

Only two of the four proxy servers in our testbed allowed us to specify multiple directories for cache storage: Squid and Proxy N. In this section we present results for both Squid and Proxy N when the cache storage is spread over one and two disks, and compare these results with those collected when caching was disabled. For these experiments we used the same cache size (75 MB), since we are interested only in understanding the impact of one extra disk on proxy performance. Table 1 shows the results for Squid. Surprisingly, there is no improvement in overall performance when we add an extra disk; in fact, there is a small slowdown in latency in both phases, while the hit ratio remains about the same. Table 2 shows the results for Proxy N. For Proxy N, the extra disk yielded a 10% improvement in client latency, and the hit ratio remained about the same. In addition, the number of errors that occurred in the first phase decreased by 20%. This is probably a side effect of the latency improvement: because requests are handled faster, new requests are taken off the pending-connections queue sooner, and the probability of finding this queue full decreases.
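
For reference, the percentages quoted above can be recomputed directly from the first-phase entries of Table 2 (a quick check, not part of the measurement tools):

    # Percentages quoted above, recomputed from Table 2 (Proxy N, first phase).
    lat_one, lat_two = 9.82, 8.78      # latency in seconds, one disk vs. two disks
    err_one, err_two = 86.67, 68.67    # errors, one disk vs. two disks

    print(f"latency improvement: {(lat_one - lat_two) / lat_one:.0%}")   # ~11%, roughly the 10% cited
    print(f"error reduction:     {(err_one - err_two) / err_one:.0%}")   # ~21%, roughly the 20% cited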

Comparing Proxy N's latency in the two-disk and no-caching experiments, it can be seen that the extra disk indeed alleviates the disk bottleneck for this proxy. In the no-caching experiment there is minimal load on the disk. In the caching experiment, when the proxy has only one disk arm to use, the client latency in the first phase increases by 46%, most of which is contributed by the disk bottleneck. When the proxy has two disk arms to use, however, the latency increase is limited to 30%. Furthermore, with two disk arms the processing of cache hits is sped up and the second-phase latency improves significantly.
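
The 46% and 30% figures follow from the first-phase latencies in Table 2:

    # First-phase latency relative to the no-caching run (Table 2, Proxy N).
    no_cache, one_disk, two_disks = 6.71, 9.82, 8.78   # seconds

    print(f"one disk:  +{(one_disk - no_cache) / no_cache:.0%}")    # ~46%
    print(f"two disks: +{(two_disks - no_cache) / no_cache:.0%}")   # ~31%, close to the 30% cited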

 
Table 3: Proxy N resource consumption for the one-disk, two-disk, and no-caching experiments.

  Metric                         One disk        Two disks        No caching
  Disk 1 - reads/s               --              10.60 (7%)       --
  Disk 1 - writes/s              --              41.43 (10%)      --
  Disk 1 - Kbytes read/s         --              43.53 (9%)       --
  Disk 1 - Kbytes written/s      --              239.96 (5%)      --
  Disk 1 - wait (queue length)   --              0.068 (31%)      --
  Disk 1 - svc_t (ms)            --              106.96 (8%)      --
  Disk 1 - busy (%)              --              79.11 (9%)       --
  Disk 2 - reads/s               7.06 (4%)       7.00 (10%)       0.0106 (153%)
  Disk 2 - writes/s              57.47 (4%)      26.08 (12%)      0.53 (4%)
  Disk 2 - Kbytes read/s         18.09 (11%)     29.54 (8%)       0.0576 (158%)
  Disk 2 - Kbytes written/s      344.09 (1%)     156.53 (16%)     4.02 (5%)
  Disk 2 - wait (queue length)   1.82 (11%)      0.00168 (67%)    --
  Disk 2 - svc_t (ms)            174.46 (2%)     67.53 (8%)       24.18 (4%)
  Disk 2 - busy (%)              94.36 (3%)      48.83 (12%)      1.14 (4%)
  CPU idle (%)                   76.17 (0.8%)    72.79 (4%)       80.11 (0.7%)
  CPU user (%)                   6.57 (3%)       7.36 (16%)       6.72 (1%)
  CPU system (%)                 17.25 (3%)      19.83 (11%)      13.17 (4%)
  Page-ins/s                     31.54 (2%)      39.30 (10%)      0.062 (173%)
  Page-outs/s                    180.57 (4%)     198.40 (10%)     0.539 (173%)


 
Table 4: Squid resource consumption for the one-disk, two-disk, and no-caching experiments.

  Metric                         One disk        Two disks         No caching
  Disk 1 - reads/s               --              12.93 (5%)        --
  Disk 1 - writes/s              --              17.30 (4%)        --
  Disk 1 - Kbytes read/s         --              61.60 (7%)        --
  Disk 1 - Kbytes written/s      --              134.48 (3%)       --
  Disk 1 - wait (queue length)   --              0.000778 (23%)    --
  Disk 1 - svc_t (ms)            --              79.62 (2%)        --
  Disk 1 - busy (%)              --              43.71 (3%)        --
  Disk 2 - reads/s               14.71 (6%)      13.35 (5%)        0.0797 (12%)
  Disk 2 - writes/s              28.98 (12%)     17.56 (3%)        0.663 (14%)
  Disk 2 - Kbytes read/s         58.48 (9%)      69.17 (5%)        0.243 (45%)
  Disk 2 - Kbytes written/s      207.07 (15%)    134.50 (3%)       6.67 (12%)
  Disk 2 - wait (queue length)   0.80 (18%)      0.0041 (37%)      --
  Disk 2 - svc_t (ms)            153.23 (11%)    80.56 (2%)        25.69 (10%)
  Disk 2 - busy (%)              58.62 (8%)      43.50 (2%)        1.297 (16%)
  CPU idle (%)                   78.59 (2%)      79.14 (0.5%)      67.88 (8%)
  CPU user (%)                   7.04 (7%)       6.73 (1%)         10.74 (18%)
  CPU system (%)                 14.32 (7%)      14.11 (2%)        21.35 (18%)
  Page-ins/s                     37.45 (9%)      47.96 (0.7%)      0.0433 (128%)
  Page-outs/s                    63.18 (34%)     100.97 (6%)       3.01 (86%)

However, despite these improvements, we were surprised by the results, since we had expected a more drastic improvement in latency. During these experiments we also collected disk, processor, and paging activity using vmstat and iostat, in order to get a better picture of how the proxies consume system resources.

Table 3 shows the main statistics collected by these tools for Proxy N. The disk is clearly a bottleneck, since it was busy over 94% of the time. With the extra disk, read and write requests were spread over two disks and, as a consequence, the service time (svc_t), i.e. the average time spent servicing a request, dropped significantly, due mainly to the reduction in queueing delay. The average queue length (wait) drops to almost 0 for both disks. The total number of read and write operations is larger in the two-disk experiment as a consequence of less contention for disk access; disk throughput increases, and so does disk bandwidth. Processor and paging activity remain almost the same, with a slight increase in CPU utilization in both user and system modes. However, the statistics also show that the load is not evenly distributed between the disks: disk 1 received a larger number of requests. This may be one explanation for the smaller-than-expected improvement from the two disk arms, and we are still investigating the issue.
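
Summing the per-arm figures in Table 3 makes the aggregate effect, and the uneven split, explicit:

    # Aggregate disk activity for Proxy N, summed over both arms (Table 3).
    one_disk_ops = 7.06 + 57.47                        # reads/s + writes/s on the single arm
    one_disk_kb  = 18.09 + 344.09                      # Kbytes read/s + Kbytes written/s
    two_disk_ops = (10.60 + 41.43) + (7.00 + 26.08)    # disk 1 + disk 2
    two_disk_kb  = (43.53 + 239.96) + (29.54 + 156.53)

    print(f"operations/s: {one_disk_ops:.1f} -> {two_disk_ops:.1f}")            # ~64.5 -> ~85.1
    print(f"Kbytes/s:     {one_disk_kb:.1f} -> {two_disk_kb:.1f}")              # ~362 -> ~470
    print(f"two-disk split: {10.60 + 41.43:.1f} vs {7.00 + 26.08:.1f} ops/s")   # disk 1 carries more load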

Table 4 shows similar results for Squid. It is clear that the disk bottleneck was reduced when the extra disk was added: the service time (svc_t) was reduced by almost 50% for both disks, and the queue length (wait) and busy time were also reduced. As a consequence, disk throughput and bandwidth increased. However, these improvements were not reflected in the client latency. Processor utilization remained about the same, but paging activity increased when the extra disk was added. We are still in the process of finding out why Squid behaves sub-optimally, and why Squid and Proxy N behave so differently.
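
Again, a quick recomputation from Table 4 quantifies the change:

    # Squid: per-arm service time and utilization, one disk vs. two disks (Table 4).
    svc_one  = 153.23                  # svc_t in ms, single arm
    svc_two  = (79.62 + 80.56) / 2     # svc_t in ms, averaged over the two arms
    busy_one = 58.62                   # per-arm busy %, single arm
    busy_two = (43.71 + 43.50) / 2     # per-arm busy %, with two arms

    print(f"svc_t: {svc_one:.0f} ms -> {svc_two:.0f} ms ({1 - svc_two / svc_one:.0%} lower)")  # ~48% lower
    print(f"busy:  {busy_one:.1f}% -> {busy_two:.1f}% per arm")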

