
Performance

Using the UCB traces, we study the performance of the prediction algorithm while varying the prefix depth m, the search depth l, and the threshold t. The user-side prefetch buffers are assumed to be 2MB. The performance metrics are the same as described in Section 4.1. We also study the effect of limiting the size of the history structure, and the effect of imperfect interaction with the user-side prefetch buffers.

In our first set of simulations, we assume that the history structure can occupy up to 64MB, and that the interaction between the proxy and the user-side prefetch buffers is as described in Section 3.2. We experimented with prefix depths m of 1 and 4, search depths l of 1, 4, and 8, and thresholds t of 0.01, 0.10, and 0.25. We simulated two modem speeds: 56Kbps and 28.8Kbps.
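For concreteness, these settings form a small parameter grid. The following trivial sketch (hypothetical variable names; the simulator itself is not shown here) enumerates the (m, l, t) triples that label the bars in Figures 7 through 12:

    # Hypothetical enumeration of the simulated (m, l, t) configurations.
    from itertools import product

    prefix_depths = (1, 4)              # m
    search_depths = (1, 4, 8)           # l
    thresholds    = (0.01, 0.10, 0.25)  # t

    configs = list(product(prefix_depths, search_depths, thresholds))
    print(len(configs), "configurations per modem speed")  # 18
    print(configs[0])                                      # (1, 1, 0.01)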


  
Figure 7: Fraction of requests that are serviced by the prefetch buffer (the first number specifies the prefix depth, the second specifies the search depth, the last specifies the threshold).
[graphs/real_saves.eps]


  
Figure 8: Requests serviced by prefetch buffer, latency savings, extra bandwidth.
[graphs/real_everything.eps]


  
Figure 9: Breakdown of latency savings.
[graphs/real_breakdown.eps]

The performance results for 56Kbps modem lines are shown in the following three figures. Figure 8 shows three bars for each algorithm configuration, illustrating the request savings, latency reduction, and wasted bandwidth. Figure 9 shows the breakdown of latency reduction into latency hiding and contention avoidance. Figure 7 shows the breakdown of request savings into three categories.

Figure 8 shows that for 56Kbps modem lines, the pre-pushing scheme can reduce user-visible latency by 9.3% at a cost of 52% wasted bandwidth (at (1, 8, 0.01)), by 8.2% at a cost of less than 20% wasted bandwidth (at (4, 8, 0.10) or (1, 8, 0.10)), and by about 7% at a cost of about 10% wasted bandwidth (at (4, 8, 0.25)). Thus, there is an inherent tradeoff between latency reduction and wasted bandwidth: the more aggressive the prediction algorithm, the more likely it is to push useful documents, but also the more likely it is to push useless ones. On the other hand, a balance between latency reduction and wasted bandwidth can be struck; depending on the environment and the cost of wasted bandwidth, any one of the above configurations may be desirable.
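To make the role of the threshold concrete, here is a minimal sketch (hypothetical document names and probabilities; the paper gives no pseudocode) of the filtering step we assume the predictor performs. Lowering t admits more candidates for pre-pushing, raising the chances of pushing both useful and useless documents:

    # Minimal sketch of the threshold knob t; the candidate list and its
    # probabilities are hypothetical, not taken from the paper.
    def select_for_prepush(candidates, t):
        """Keep candidates whose estimated reference probability meets t."""
        return [doc for doc, prob in candidates if prob >= t]

    candidates = [("a.html", 0.30), ("b.gif", 0.12),
                  ("c.html", 0.05), ("d.html", 0.02)]
    for t in (0.01, 0.10, 0.25):
        print(t, select_for_prepush(candidates, t))
    # t = 0.01 pushes all four documents (aggressive, more wasted bandwidth);
    # t = 0.25 pushes only a.html (conservative, less latency reduction).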

Figure 8 also shows that the threshold t has the largest effect on wasted bandwidth. This is to be expected, since a lower threshold means more files are pre-pushed. For a fixed threshold, increasing the search depth l tends to increase both wasted bandwidth and latency reduction, particularly at very low thresholds, because a deeper search under a low threshold generates many candidate documents. Thus, for low thresholds, if wasted bandwidth is a concern, l=1 seems most appropriate. For a high threshold (e.g., 0.25), however, a deeper search yields more documents that satisfy the threshold constraint and are useful, so a higher search depth (for example, l=4) is more appropriate. Finally, note that the performance of l=4 is very close to that of l=8.
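One way to picture the interaction of l and t is a sketch under the assumption that the predictor expands a first-order prediction graph (the graph and probabilities below are hypothetical): candidates within l steps of the current document are collected, path probabilities are multiplied along the way, and the threshold prunes the expansion.

    # Hypothetical sketch of search depth l: depth-limited expansion of a
    # prediction graph. graph[doc] maps successors to estimated probabilities.
    graph = {
        "index.html": {"a.html": 0.5, "b.html": 0.3},
        "a.html":     {"c.html": 0.6},
        "b.html":     {"c.html": 0.4, "d.html": 0.4},
    }

    def candidates(start, l, t):
        """Collect documents within l steps whose path probability >= t."""
        found, frontier = {}, [(start, 1.0)]
        for _ in range(l):
            nxt = []
            for doc, p in frontier:
                for succ, cond in graph.get(doc, {}).items():
                    q = p * cond
                    if q >= t:                  # the threshold prunes the search
                        found[succ] = max(found.get(succ, 0.0), q)
                        nxt.append((succ, q))
            frontier = nxt
        return found

    print(candidates("index.html", l=1, t=0.10))  # direct successors only
    print(candidates("index.html", l=4, t=0.10))  # deeper search adds c.html, d.html

Under a low threshold, a deeper search keeps admitting candidates whose path probabilities have decayed, which is consistent with l=1 wasting less bandwidth there.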

Looking at the prefix depth m, we see that its effect depends on the threshold t. For t=0.01, a higher prefix depth reduces wasted bandwidth at the cost of slightly lower latency reduction. For t=0.10, the effect of a higher prefix depth is minimal. For t=0.25, a higher prefix depth increases both latency reduction and wasted bandwidth. The reason is that for a low threshold, a higher m finds documents that are more specific to the context (and therefore more likely to be referenced) and moves them to the front of the prepush list. In other words, it improves the accuracy of the prediction, reducing the wasted bandwidth without affecting the latency reduction much. For a high threshold, a higher m uses more previous accesses as context, and the estimated probabilities of context-specific documents increase because the sample size for longer contexts is smaller. Thus, more context-specific documents become eligible for prediction.
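Similarly, the prefix depth m can be read as the length of the access context used to index predictions. The sketch below assumes a Markov-style history table keyed by the last m accesses (the table contents are hypothetical): longer contexts match fewer samples, so their conditional probabilities are higher and more of them clear a high threshold.

    # Hypothetical sketch of prefix depth m: predictions indexed by the tuple
    # of the last m accesses; longer contexts are more specific but rarer.
    history = {
        ("a.html",):              {"x.html": 0.20, "y.html": 0.15},  # m = 1
        ("index.html", "a.html"): {"x.html": 0.40},                  # m = 2
    }

    def predict(recent, m, t):
        """Look up predictions for the last m accesses and apply threshold t."""
        preds = history.get(tuple(recent[-m:]), {})
        return {doc: p for doc, p in preds.items() if p >= t}

    recent = ["index.html", "a.html"]
    print(predict(recent, m=1, t=0.25))  # {} -- nothing clears the high threshold
    print(predict(recent, m=2, t=0.25))  # {'x.html': 0.4} -- longer context qualifies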

Figure 9 shows that the latency reduction from contention avoidance is only a small percentage of the total latency reduction. This is consistent with our study of perfect predictors with small lookahead: contention avoidance clearly relies on predictions that reach far into the future.

Figure 7 shows that only a small percentage of requests hit on documents that are currently being pushed, and that the caching effect of the prefetch buffer accounts for about 38% of the request savings. In other words, about 38% of the observed latency reduction is in fact due to the caching effect of the prefetch buffer. This suggests, first, that in the UCB traces the users are probably using browser caches that are too small; in general, browser caches should be larger. Second, assuming that the first step in reducing user latency is to increase the browser cache size, we should then study the performance of the algorithm under a larger browser cache. Since requests that hit in the browser cache are not seen by the proxy, the effect of a larger browser cache on the algorithm's performance is not clear; we are currently investigating this question. Third, comparing with the results in Section 4, we see that another 2MB increase to the browser cache seems to be sufficient.

Finally, comparing the latency reductions with the limit studies in Section 4, we see that the prediction algorithm achieves about half of the latency reduction of a perfect predictor.




  
Figure 10: Fraction of requests that are serviced by the prefetch buffer (the first number specifies the prefix depth, the second specifies the search depth, the last specifies the threshold).
[graphs/288_saves.eps]


  
Figure 11: Requests serviced by prefetch buffer, latency savings, extra bandwidth.
[graphs/288_everything.eps]


  
Figure 12: Breakdown of latency savings.
[graphs/288_breakdown.eps]

Figures 10 through 12 show the corresponding results for 28.8Kbps modems. As we can see, the latency reduction and the request savings are generally about 10% lower than under 56Kbps modems. This is because in many cases there is not enough time to prepush a document to the client, as the increase in partial prefetch requests demonstrates. The effects of the parameters are the same as for 56Kbps modems.



Although we only show threshold values up to 0.25, we have experimented with higher thresholds, and the trend is the same: wasted bandwidth decreases, latency reduction decreases, and good performance relies more on higher search depth and higher prefix depth.



 