
Bloom Filters as Summaries

 
  
Figure 5: Total hit ratio under different summary representations.


  
Figure 6: Ratio of false hits under different summary representations. Note that the y-axis is in log scale.


  
Figure 7: Number of network messages per user request under different summary forms. Note that the y-axis is in log scale.


  
Figure 8: Bytes of network messages per user request under different summary forms.


  
Figure 9: Memory requirement of different summary representations.

Bloom filters provide a straightforward mechanism to build summaries. A proxy builds a Bloom filter from the list of URLs of cached documents, and sends the bit array plus the specification of the hash functions to other proxies. When updating the summary, the proxy can either specify which bits in the bit array are flipped, or send the whole array, whichever is smaller.

Each proxy maintains a local copy of its Bloom filter and updates it as documents are added to and evicted from the cache. As explained earlier, to update the local filter, the proxy maintains an array of counters, where each counter records the number of times the corresponding bit has been set to 1. When a document is added to the cache, the counters for the corresponding bits are incremented; when it is deleted from the cache, the counters are decremented. When a counter goes from 0 to 1 or drops from 1 to 0, the corresponding bit is set to 1 or 0, and a record of the change is appended to the list of pending updates.
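The following is a minimal sketch, in Python, of this counter-based maintenance. The class, its method names, and the hash_positions helper (which maps a URL to k bit positions) are illustrative assumptions, not the paper's implementation.

\begin{verbatim}
# Sketch of the counter-based maintenance of the local Bloom filter.
# Only 0->1 and 1->0 counter transitions flip a bit, and each flip is
# recorded so it can be sent in the next summary update message.

class CountingBloomSummary:
    def __init__(self, m, hash_positions):
        self.m = m                             # filter size in bits
        self.hash_positions = hash_positions   # url -> list of bit positions
        self.counters = [0] * m                # one counter per bit
        self.pending_flips = []                # bits flipped since last update

    def add(self, url):
        for pos in self.hash_positions(url, self.m):
            self.counters[pos] += 1
            if self.counters[pos] == 1:        # bit went 0 -> 1
                self.pending_flips.append(pos)

    def remove(self, url):
        for pos in self.hash_positions(url, self.m):
            self.counters[pos] -= 1
            if self.counters[pos] == 0:        # bit went 1 -> 0
                self.pending_flips.append(pos)

    def take_update(self):
        # Bit flips accumulated for the next summary update; the proxy
        # would send these (or the whole bit array, whichever is smaller).
        flips, self.pending_flips = self.pending_flips, []
        return flips
\end{verbatim}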

The advantage of Bloom filters is that they provide a tunable tradeoff between the memory requirement and the false positive ratio (which induces false hits). Thus, if proxies want to devote less memory to the summaries, they can do so at the cost of a slight increase in inter-proxy traffic.
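Recalling the analysis from the preceding section: with $m$ bits, $n$ cached documents, and $k$ hash functions, the probability of a false positive is

\begin{displaymath}
p \;=\; \left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^{k}
  \;\approx\; \left(1 - e^{-kn/m}\right)^{k},
\end{displaymath}

which, for the optimal choice $k = (m/n)\ln 2$, reduces to $p \approx (0.6185)^{m/n}$; each additional bit per document thus lowers the false positive probability by a constant factor.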

We experimented with three configurations for Bloom filter based summaries, with the number of bits set to 8, 16, and 32 times the average number of documents in the cache (this ratio is also called the ``load factor''). The average number of documents is calculated by dividing the cache size by 8K bytes, the average document size. All three configurations use four hash functions. Four is not the optimal number of hash functions for every configuration, but it suffices to demonstrate the performance of Bloom filters. The hash functions are built by first computing the MD5 signature [38] of the URL, which yields 128 bits, then dividing the 128 bits into four 32-bit words, and finally taking each 32-bit word modulo the table size m. MD5 is a cryptographic message digest algorithm that hashes arbitrary-length strings to 128 bits [38]. We selected it because of its well-known properties and relatively fast implementation.
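A sketch of this hash construction using Python's standard hashlib follows; the function name is an assumption, and the function can serve as the hash_positions helper in the earlier sketch.

\begin{verbatim}
import hashlib

def md5_hash_positions(url, m, k=4):
    # 128-bit MD5 digest of the URL, split into four 32-bit words,
    # each taken modulo the table size m.
    digest = hashlib.md5(url.encode('utf-8')).digest()   # 16 bytes
    words = [int.from_bytes(digest[4*i:4*(i+1)], 'big') for i in range(4)]
    return [w % m for w in words[:k]]

# Example (hypothetical URL):
# md5_hash_positions("http://example.com/index.html", m=8*1024)
\end{verbatim}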

The performance of these summary representations, the exact-directory approach, and the server-name approach is shown in Figures 5 through 9. Figure 5 shows the total cache hit ratios and Figure 6 shows the false hit ratios. Note that the y-axis in Figure 6 is in log scale. The Bloom filter based summaries have virtually the same cache hit ratio as the exact-directory approach, and a slightly higher false hit ratio when the bit array is small. The server-name approach has a much higher false hit ratio; it also has a higher cache hit ratio, probably because its many false hits help avoid false misses.

Figure 7 shows the total number of inter-proxy network messages, including summary updates and query messages (the latter comprise remote cache hits, false hits, and remote stale hits). Note that the y-axis in Figure 7 is in log scale. For comparison we also list the number of messages incurred by ICP in each trace. All messages are assumed to be unicast. The figure normalizes the number of messages by the number of HTTP requests in each trace. Both exact-directory and Bloom filter based summaries perform well, while server-name and ICP generate many more messages. For Bloom filters there is, as expected, a tradeoff between bit array size and the number of messages. However, once the false hit ratio is small enough, false hits are no longer the dominant contributor to inter-proxy messages; rather, remote cache hits and remote stale hits dominate. Thus, the difference in network messages between load factor 16 and load factor 32 is small. Compared to ICP, Bloom filter based summaries reduce the number of messages by a factor of 25 to 60.

Figure 8 shows the estimated total size of inter-proxy network messages in bytes. We estimate the size because update messages tend to be larger than query messages. A query message, in both ICP and the other approaches, is assumed to consist of 20 bytes of header plus a 50-byte average URL. A summary update in the exact-directory and server-name approaches is assumed to consist of 20 bytes of header plus 16 bytes per change. A summary update in the Bloom filter based approach is estimated at 32 bytes of header (see Section 6) plus 4 bytes per bit flip. The results show that in terms of message bytes, Bloom filter based summaries improve over ICP by 55% to 64%. In other words, summary cache uses occasional bursts of large messages to avoid a continuous stream of small messages. Looking at the CPU overhead and network interface packets in Tables 2, 6 and 7 (in which SC-ICP stands for the summary cache approach), we can see that this is a good tradeoff.
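To illustrate how these per-message estimates combine into a per-request byte count, the sketch below applies the sizes given above; the message rates in the example call are placeholders, not values measured from the traces.

\begin{verbatim}
# Per-message sizes follow the estimates in the text; the per-request
# message counts passed in are hypothetical placeholders.

QUERY_BYTES      = 20 + 50   # query header + average URL
BLOOM_UPDATE_HDR = 32        # summary-update header (Section 6)
BYTES_PER_FLIP   = 4         # one 4-byte record per bit flip

def bloom_bytes_per_request(queries_per_req, updates_per_req, flips_per_update):
    query_cost  = queries_per_req * QUERY_BYTES
    update_cost = updates_per_req * (BLOOM_UPDATE_HDR
                                     + flips_per_update * BYTES_PER_FLIP)
    return query_cost + update_cost

# Example with made-up rates:
# bloom_bytes_per_request(0.05, 0.01, 200)
\end{verbatim}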

Finally, Figure 9 shows the memory required per proxy by the summary cache approaches, as a percentage of cache size. The three Bloom filter configurations consume much less memory than exact-directory, yet perform similarly to it in all other respects. The Bloom filter summary at a load factor of 8 has a memory requirement similar to or smaller than that of the server-name approach, with far fewer false hits and network messages. Since the DEC, UCB, UPisa, NLANR and Questnet traces have 16, 8, 8, 4, and 12 proxies respectively, the figure also shows that the memory requirement grows linearly with the number of proxies.

Considering all the results, we see that Bloom filter summaries provide the best performance in terms of low network overhead and low memory requirements. The approach is simple and easy to implement. In addition to MD5, faster hashing methods are available: hash functions can be based on polynomial arithmetic, as in Rabin's fingerprinting method (see [42,7]), or a simple hash function (e.g., [22, p. 48]) can be used to generate, say, 32 bits, with further bits obtained by taking random linear transformations of these 32 bits viewed as an integer. A disadvantage is that these faster functions are efficiently invertible (that is, one can easily construct a URL that hashes to a particular location), a fact that malicious users might exploit for nefarious purposes.
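The random-linear-transformation idea could be realized as sketched below; the base hash (CRC-32 here), the prime, and the coefficients are illustrative choices rather than anything specified in the paper.

\begin{verbatim}
import random
import zlib

_PRIME  = (1 << 61) - 1   # a large prime for the linear transformations
_rng    = random.Random(0)
_COEFFS = [(_rng.randrange(1, _PRIME), _rng.randrange(_PRIME))
           for _ in range(4)]

def fast_hash_positions(url, m, k=4):
    # A cheap 32-bit base hash, stretched into k values by applying
    # independent random linear transformations and reducing modulo m.
    base = zlib.crc32(url.encode('utf-8'))
    return [((a * base + b) % _PRIME) % m for (a, b) in _COEFFS[:k]]
\end{verbatim}

Because all k values are derived from the same 32-bit base hash, two URLs that collide in the base hash collide in every bit position; the 128 independent bits of MD5 avoid this, which is part of its appeal despite the higher cost.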

