Next: Limit Study Results
Up: Potential of Proxy-Side Pre-Pushing
Previous: Potential of Proxy-Side Pre-Pushing
We are primarily interested in the following performance metrics for our
prediction algorithms:
- Latency Reduction: the percentage reduction in the user-visible
latency from the pre-push technique. The latency reduction comes from
two sources:
- latency hidden: the user-visible latency that is either avoided
because the requested document is already in the prefetch
buffer (completely hidden), or reduced because it is
being pre-pushed (partially hidden).
- contention avoidance: the reduction in user latency that is
due to the document being transferred to the user earlier
than in the no-prefetching case. When multiple documents are
being sent to the user, the transfers share the limited modem
bandwidth. If prefetching makes one of the document
transfers happen earlier, then the rest of the document
transfers all complete sooner, reducing the
user-visible latency.
- Wasted Bandwidth: the number of bytes that are pre-pushed from
the proxy to the client but are never read by the client. That is, it is the
sum of the sizes of files that are pre-pushed from the proxy to the user but
are evicted from the user-side prefetch buffer without ever being accessed
by the user. In the figures that follow, we show the ratio between the
bandwidth wasted and the total bytes accessed by the users.
- Request Savings: the number of times that the user requests a
document that is already in the prefetch buffer, or is being sent from the
proxy to the user. We can further divide the request savings into the
following three categories:
- prefetched: the document is in the prefetch buffer, and
the user is accessing it for the first time since it entered
the prefetch buffer.
- cached: the document is in the prefetch buffer, but this is not
the first time that the user has accessed it since it entered
the prefetch buffer.
- partially prefetched: the document is still being sent from
the proxy to the user.
We separate the request savings into these three categories because we
would like to distinguish the caching effect of the prefetch buffer from
the prefetching effect. The prefetch buffer also acts as an extended browser
cache for the user, and to understand the latency reduction due to the
prediction algorithm, it is important to keep the two effects apart.
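As a concrete sketch, the three request-savings categories can be expressed as a lookup against the prefetch buffer. The bookkeeping below (a per-entry "transfer complete" flag and an "accessed before" flag) is a hypothetical implementation detail; the text does not specify how the buffer tracks state.

```python
from dataclasses import dataclass

@dataclass
class BufferEntry:
    # Hypothetical per-document state in the user-side prefetch buffer:
    # whether the pre-push transfer has finished, and whether the user
    # has already read the document since it entered the buffer.
    complete: bool
    accessed: bool = False

def classify_request(buffer: dict, url: str) -> str:
    """Classify a user request into one of the three request-savings
    categories, or 'miss' when the request produces no saving."""
    entry = buffer.get(url)
    if entry is None:
        return "miss"
    if not entry.complete:
        return "partially prefetched"  # still being sent from proxy to user
    if entry.accessed:
        return "cached"                # extended-browser-cache effect
    entry.accessed = True
    return "prefetched"                # first access since the pre-push
```

For example, two consecutive requests for the same fully pre-pushed document are counted once as "prefetched" and then as "cached", which is exactly the distinction needed to separate the prefetching effect from the caching effect.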
Latency reduction is the primary goal of the pre-push scheme. Wasted
bandwidth measures the extra bandwidth consumed by the algorithm. Unlike in the
wide-area network, wasted bandwidth on the modem line can be tolerated:
the modem line would otherwise stay idle. For most users, who do not initiate
other network transfers (such as ftp) while Web-surfing, the wasted bandwidth
has virtually no effect. Finally, we need to examine the request savings to
understand the source of the latency reduction.
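The two quantitative metrics reduce to simple ratios. A minimal sketch, where the function and parameter names are our own rather than the paper's:

```python
def latency_reduction(no_prefetch_latency: float, prefetch_latency: float) -> float:
    """Percentage reduction in user-visible latency due to pre-pushing,
    relative to the no-prefetching baseline."""
    return 100.0 * (no_prefetch_latency - prefetch_latency) / no_prefetch_latency

def wasted_bandwidth_ratio(pushed_unused_bytes: int, accessed_bytes: int) -> float:
    """Bytes pre-pushed but evicted unread, as a fraction of the total
    bytes the user actually accessed (the ratio shown in the figures)."""
    return pushed_unused_bytes / accessed_bytes
```

For instance, cutting a 10-second page load to 7 seconds is a 30% latency reduction, and pushing 500 unread bytes against 2000 accessed bytes gives a wasted-bandwidth ratio of 0.25.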
Pei Cao
4/13/1998