
Epidemic algorithms for replicated database maintenance

Demers et al., Epidemic algorithms for replicated database maintenance, PODC 1987.

Reviews due Thursday, 9/18.

Comments

problem:
- there are multiple servers, each maintaining its own database, and the problem is keeping the databases synchronized.
- the paper improves on previous work to reduce traffic and delay

solutions:
- the paper examines several approaches:
- the original solution was direct mail: a server that receives an update sends it to all the other servers.
- the problem was that mail was unreliable, and a server might not know about all the other servers
- another previously suggested solution was anti-entropy: each server selects another server at random and compares records with it
- one problem with this method was traffic
- there were two improvements over the naive method
- keep a list of recent updates and compare those first
- then compare checksums of the databases
- only if the checksums differ, compare the whole database
- another solution was rumor mongering
- servers keep a list of hot updates, select another server at random, and send it the updates; an update is removed from the hot list with probability 1/k (see the sketch after this list)
- one improvement over this design was using a counter of failures (pushes to a server that already has the update) instead of a fixed probability; another was using feedback from the recipient.
- this method has the advantage of less traffic; the disadvantage is that there is a chance some servers never receive the update
- this can be fixed by an anti-entropy cycle as a backup
- another improvement was selecting the random partner according to a spatial distribution instead of a uniform distribution.
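
A minimal sketch of the rumor-mongering push variant described in the list above, under the assumption that each site holds a set of update ids; the class and parameter names are illustrative, not from the paper:

    import random

    K = 4                      # lose interest with probability 1/K per redundant push
    N = 100                    # number of sites (illustrative)

    class Site:
        def __init__(self):
            self.known = set()     # update ids this site has
            self.hot = set()       # update ids it is still actively spreading

        def receive(self, update):
            if update not in self.known:
                self.known.add(update)
                self.hot.add(update)
                return True        # the update was news to this site
            return False           # redundant push

    def rumor_cycle(sites):
        # one synchronous cycle: every site pushes its hot rumors to one random partner
        for site in sites:
            if not site.hot:
                continue
            partner = random.choice([s for s in sites if s is not site])
            for update in list(site.hot):
                was_news = partner.receive(update)
                if not was_news and random.random() < 1.0 / K:
                    site.hot.discard(update)    # coin variant of losing interest

    sites = [Site() for _ in range(N)]
    sites[0].receive("update-1")                # inject one update at a single site
    cycles = 0
    while any(s.hot for s in sites):
        rumor_cycle(sites)
        cycles += 1
    residue = sum(1 for s in sites if "update-1" not in s.known) / N
    print(f"converged after {cycles} cycles, residue = {residue:.2%}")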

- another point in the paper was using dormant death certificates to handle deleted records.
- another interesting point was that push and pull have different behavior, and depending on the rate of new updates, the better of the two can be selected.

evaluation:
- for evaluation, they looked at traffic, average delay, maximum delay, and the percentage of servers that didn't receive the updates
- they provided mathematical proof for some of the advantages and performed simulations for others.

I'm not sure, but it seems that because of the increase in network speeds, the problem is not as important any more.
For example, in the previous paper we saw that only one server accepted writes and the other servers were updated later.

SUMMARY: The paper describes methods for maintaining consistency between several replicas of a database. Some of the methods are modeled on studies in the field of epidemiology, where database updates are treated as infectious diseases, and can be tuned to spread as quickly as possible.

PROBLEM SOLVED: The Xerox Corporate Internet was experiencing very high traffic and inconsistency among replicas using traditional methods of replication such as Direct Mail and Anti-Entropy. A new method was needed that could drive the replicas toward consistency with lower network traffic.

CONTRIBUTIONS: By realizing that anti-entropy can be modeled as an epidemic, they were able to use other epidemic models such as rumor-mongering, and by treating them as epidemics specifically tune the parameters of each model for maximum spread. They also propose a hybrid approach: Rumor-mongering is very effective at spreading updates with little network traffic, but it does not provide guarantees that updates will reach all sites. As such, they then employ anti-entropy, which does make that guarantee but at the cost of much more traffic. When a discrepancy is realized, they then make the update a "hot rumor" again, combining the best features of both approaches. They also propose a method of managing deletions using Dormant Death Certificates, which nicely balances the amount of storage needed with the desire for correctness (i.e. not having old data reappear).
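
A hedged sketch of how the hybrid described above could be wired together: an infrequent anti-entropy exchange compares two sites, and any update one of them is missing is re-installed as a hot rumor so that ordinary rumor mongering finishes the distribution. The data layout and function names are assumptions, not the paper's code:

    class Site:
        def __init__(self):
            self.known = {}        # update id -> value
            self.hot = set()       # ids still being actively rumored

    def anti_entropy_backup(a, b):
        # resolve differences between two sites; anything one side was missing
        # becomes a hot rumor again and is respread cheaply by rumor mongering
        for uid, value in list(a.known.items()):
            if uid not in b.known:
                b.known[uid] = value
                b.hot.add(uid)
        for uid, value in list(b.known.items()):
            if uid not in a.known:
                a.known[uid] = value
                a.hot.add(uid)

    # Usage idea: run rumor mongering every cycle, and only occasionally pair
    # random sites with anti_entropy_backup to catch whatever the rumors missed.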

APPLICABILITY: Maintaining consistency among replicas is a problem that will never go away, and as such understanding many approaches and their pros and cons is useful. Perhaps, as better epidemic models have become available since 1987, the parameters could be tuned to make this approach even more effective. Regardless, the rumor-mongering approach will scale much better than direct mail or other simplistic approaches as the size of a given network grows. In today's world-wide network, this is important because we are operating on scales well beyond what might have been envisioned 30 years ago. Likewise, their awareness that the spatial distribution of a network matters is also still relevant, as we have very connected but very heterogeneous networks consisting of a wide variety of uplinks, each with its own speed and latency characteristics. Finally, I always find it interesting when an approach from a completely different field of study is applied to Computer Science, as they did by using epidemiology to specifically "design" the most infectious disease, a.k.a. the best parameters for spreading database updates.

Summary:
- Anti-entropy spreads data between two randomly chosen sites to maintain data consistency.
- Pushes are used to pass data out while pulls are used to take data in.
- Death certificates are used to prevent deleted items from reappearing in the network.

Problem:
- Databases need to propagate updates to all other sites.
- The database should be eventually consistent, scalable, operational under bandwidth limit, and capable of spreading the absence of an item.

Contributions:
- The paper introduces anti-entropy and rumor mongering, both modeling the behavior of an epidemic. When a new update occurs, it is propagated to other sites while it is still hot; once many sites have the update, propagation slows down and the update is removed from circulation. The bigger impact of anti-entropy and rumor mongering is finding the connection between real epidemics and databases, so research on the spread of epidemics can be applied to databases and vice versa.
- The paper presents many variations, one of which is the push vs pull mechanism. Push and pull are different directions of spreading data, and each is faster under certain constraints; for example, the paper finds that push does better than pull under connection limitations. Other systems with similar network topologies can refer to this analysis to decide which direction is better (see the sketch after this list).
- The paper presents the idea of death certificates so that outdated existing items will not overwrite newly deleted items. However, death certificates would fill the database unless they are eventually deleted, so they are kept at only a few sites for an extended period of time. Death certificates are another link to the epidemic model, much as people who have recovered from a disease retain a record of it in their immune system.
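
To illustrate the push vs pull point above, here is a small sketch that iterates the approximate recurrences from the paper's analysis for the probability p that a site is still susceptible after a cycle (push: p' ~ p*e^-(1-p), pull: p' = p^2) in the unlimited-connection case; treat it as an illustration, not a reproduction of the paper's figures:

    import math

    def push_step(p):
        # a susceptible site stays susceptible if no infected site happens to push to it
        return p * math.exp(-(1.0 - p))

    def pull_step(p):
        # a susceptible site stays susceptible only if it pulls from another susceptible site
        return p * p

    p_push = p_pull = 0.5          # start when half the sites already have the update
    for cycle in range(1, 6):
        p_push, p_pull = push_step(p_push), pull_step(p_pull)
        print(f"cycle {cycle}: push residue ~{p_push:.6f}, pull residue ~{p_pull:.6f}")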

Applicability:
- The paper's algorithms are randomized and do not guarantee convergence on their own. Although the presented algorithm is fast and places a light load on the network, it is not applicable in systems that require guaranteed convergence.
- Many ideas presented in this paper require tuning. For example, death certificate timers should be set so that they do not take up too much storage in the database while still keeping the network from spreading dead items. Another example is counter vs coin, which requires tuning to get optimal replication without consuming too much network bandwidth. Therefore, given the tuning required and how different every system is, it might be difficult to successfully apply the ideas of this paper to other distributed systems.

Summary,
In this paper the authors present an interesting mechanism to distribute updates to all the sites hosting a distributed database. These methods are based on the natural spreading mechanisms of disease epidemics.

Problems,
In a distributed database that is hosted at multiple sites and where any of those sites could receive the updates there are several constraints in spreading the updates to ensure data consistency. Some of the constraints are,
1. The time taken to update all the sites shouldn't be too large
2. The network traffic generated by the updates should not become so large as to be a bottleneck to network performance
Further, the mailing system used to spread the initial updates was consuming a lot of network resources, and hence the need to devise efficient update distribution mechanisms was paramount.

Contributions,
The authors show that simple randomized algorithms can perform as well as, or sometimes even better than, complex deterministic algorithms for distributing the updates.
These interesting mechanisms like rumor mongering and the improvements to the existing anti-entropy are successful in reducing the time and network bandwidth required to distribute the updates.
The authors also develop mechanisms to safely delete the database objects by sending death certificates for the corresponding objects.
Further, the authors also discuss the metrics required to evaluate the effectiveness of similar epidemic update distribution mechanisms.

Flaws,
The authors didn't model the characteristics of the workloads handled by the system. If the workloads had been carefully modeled, alternative options like making most of the sites read-only could have been explored, which could have simplified the update process. There isn't much discussion in the paper to suggest whether the authors modeled the workloads or not.

All the randomized update mechanisms depend on a universally synchronized clock. Though it is possible to achieve the synchronization, the authors don't discuss the effects of the sites going out of sync and the necessary corrective mechanisms.

Relevancy,
In today's internet application architectures, most of the databases of very large systems are distributed across the globe and the mechanisms explored in this work along with the guidelines for evaluating such mechanisms will be useful for system designers working on similar architectures.

Summary:
The authors try to solve the problem of consistency in replicated distributed databases by introducing randomized algorithms.

Problem:
The paper tries to achieve eventual consistency in a replicated distributed database. The previous algorithms have high overhead attached to them and are neither scalable nor network efficient. This paper provides randomized algorithms which are scalable and have less network overhead.

Contribution
They propose the idea of using randomized epidemic algorithms.
They try to obtain faster convergence by relaxing consistency restrictions.
They introduce 1) anti-entropy 2) rumour mongering algorithms.
They provide deep analysis of different variations of the algorithms and discuss various tradeoffs.
In addition to the above, they provide a hybrid algorithm for consistency.
They also provide a way of handling deletions in epidemic protocols using death certificates.

Flaws:
All the epidemic algorithms depend on parameters which have to be externally configured. This means the performance of the algorithms depends on the expertise of the person configuring the system.
The algorithms are tested using simulations. The performance of the algorithms in real deployments is not given.
The algorithms are randomized and as such cannot give guarantees. Hence they cannot be used in situations where critical data is involved.
Spatial aware algorithms do not perform well on all topologies.

Applicability:
Close relatives of the algorithms discussed in this paper are widely used for propagating updates in wireless meshes and peer-to-peer systems.
Variants of the algorithms are used today to maintain eventual consistency in Amazon DynamoDB and OpenStack.

Summary:
The paper discusses the implementation of a set of modified algorithms that help distribute updates among all the replicas in a replicated database system. The authors specifically talk about the issues with conventional techniques, with special focus on problems faced at Xerox, and they come up with algorithms that work similarly to epidemics. These new algorithms were implemented on Xerox's servers and proved efficient, with reduced traffic while maintaining database consistency. The paper also discusses simulation results as well as some practical implementations with different experimental setups and how they compare.

Problem:
The main issue being tackled is how to propagate updates in a replicated database system. The primary concerns while updating are the time taken to propagate the updates and the network traffic created in doing so. The conventional mechanisms required guarantees from the underlying communication layer and also maintained consistent distributed control structures. They also followed a deterministic approach, whereas this paper advocates randomized techniques, which are much simpler.

Contributions:
• Direct mail performs the updates by mailing each update to all other sites, which is timely but not reliable since mail can be lost.
• In the anti-entropy method, sites update by comparing against randomly chosen sites; although reliable, this is much slower than direct mail. An improvement is to compare only the recent updates and then verify a checksum of the entire database, which makes it much faster (a sketch follows this list).
• Rumor mongering involves sites spreading the update to other sites when they receive an update and they keep spreading randomly until they see enough sites already updated. There can be cases where the updates are not propagated to every site.
• A key idea used in the paper is that using an epidemic algorithm will eventually infect the entire population.
• The complex epidemic mechanism proposed is based on the rumor spreading technique, although the updates traverse rapidly there is a small probability of failure with this approach wherein some of the sites are not updated.
• Anti-entropy is used in combination with the epidemic mechanism as a backup mechanism and this solves the issue of probability of failure.
• Deleting an item is not simple with anti-entropy or rumor mongering, as the absence of the item (after removing the local copy) may not propagate to all sites; on the contrary, stale copies at other sites might revive the deleted item. This is solved by issuing a death certificate when an item is deleted, which propagates like data and causes old copies to be deleted when it is encountered.
• Improvements can be made to these mechanisms by considering spatial distribution, so that the random selection gives consideration to the spatial properties of the sites. This helps reduce traffic on critical links.
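
A rough sketch of the checksum-plus-recent-update-list optimization mentioned above, assuming each replica is a dict of key -> (value, timestamp); the window size, helper names, and checksum construction are assumptions, not the Clearinghouse implementation:

    import hashlib
    import time

    WINDOW = 3600                                  # seconds considered "recent" (illustrative)

    def checksum(db):
        # order-independent checksum over (key, value, timestamp) triples
        acc = 0
        for key, (value, ts) in db.items():
            digest = hashlib.sha1(f"{key}|{value}|{ts}".encode()).digest()
            acc ^= int.from_bytes(digest[:8], "big")
        return acc

    def recent_updates(db, now):
        return {k: v for k, v in db.items() if now - v[1] <= WINDOW}

    def merge_newer(dst, items):
        for key, (value, ts) in items.items():
            if key not in dst or dst[key][1] < ts:
                dst[key] = (value, ts)

    def anti_entropy(db_a, db_b, now=None):
        now = now or time.time()
        # step 1: exchange only the recent update lists (cheap)
        merge_newer(db_b, recent_updates(db_a, now))
        merge_newer(db_a, recent_updates(db_b, now))
        # step 2: compare checksums; only on a mismatch fall back to a full comparison
        if checksum(db_a) != checksum(db_b):
            merge_newer(db_b, dict(db_a))
            merge_newer(db_a, dict(db_b))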

Applicability:
The whole idea of using epidemic mechanisms is very different and seems to have solved the main issues of traffic and delay. The results are based on implementations done in the late 1980s, but the ideas are still applicable today. An exact implementation wouldn't suffice for today's needs, since many services (for example, banking) demand much faster updates, but other non-critical services can still use the concepts.

Summary
The authors introduce several randomized algorithms for data updating in replicated Database to achieve consistency. Furthermore, quantitative analyses and some design tricks are provided.

Problem Description
The authors observed that domain updates on the Clearinghouse servers, which used both direct mail and anti-entropy, experienced very slow propagation. During their further investigation, they found that the remailing step is unworkable for a large network and that anti-entropy could still overload network links. Therefore, their goal is to reduce the network load and propagation delay while driving the replicas toward consistency.

Contribution
1. In order to avoid comparing two complete copies of the database, the authors propose that each site maintain a recent update list to compare before comparing checksums. This strategy reduces the comparison overhead and the load on the network.
2. The idea of combining rumor mongering with anti-entropy to ensure updates spread completely is smart.
3. To prevent the "resurrection" of deleted data, they suggest using death certificates, which spread like ordinary data.
4. Because sending an update to a nearby site costs much less than sending it to a distant one, the authors exploit the spatial distribution of sites to reduce the network traffic load.

Applicability
The epidemic/gossip protocols in this paper are still applicable to today's computer networks (e.g., P2P systems). Among modern distributed systems, Amazon's Simple Storage Service (S3) also implements such a protocol.

Summary:
The paper presents the problem of keeping different database replicas consistent in a globally distributed, heterogeneous setting. The authors aim for 'eventual consistency' for update and delete operations. Existing deterministic algorithms depend on various guarantees from underlying communication protocols and on maintaining control structures. The paper describes randomized algorithms and variations (such as counter-feedback-pull rumor mongering) and spatial distributions to make update/delete propagation faster.

Contributions:
1. Direct mailing may fail due to message loss or incomplete information about the network.
2. Apply concepts from epidemiology to update propagation and achieve consistency.
3. Anti-entropy can use a push, pull or push-pull strategy. Pull leads to faster convergence and push performs significantly better under connection limitations.
4. Improvements in anti-entropy: checksums and recent update lists are used to improve the efficiency of anti-entropy.
5. Rumor Mongering: Treat an update as a 'hot rumor' and randomly choose a site to share the rumor with. The main drawback is that some updates may not reach all the sites.
6. Peel back version: Rumor mongering with anti-entropy as backup can spread the update completely to all the sites.
7. Discussion about variations in techniques: Blind v/s Feedback, Counter v/s coin and push v/s pull to minimize residue, reduce traffic and delay.
8. Deleted items are replaced by death certificates to tackle the issue of propagating old copies of deleted items.
9. The idea of Dormant death certificates to avoid obsolete data items from getting back is interesting.
10. Reduced traffic (specially on critical links) by using spatial distribution.

Discussion:
The paper presents a very interesting analogy with the field of epidemiology and is enjoyable to read and correlate. The paper does not talk about handling failures, such as failure in delivering updates, and how epidemic algorithms would handle that. As mentioned in the paper, work needs to be done to improve performance in pathological networks. Since the targeted system aims for eventual consistency, it cannot be applied to systems requiring time-critical updates. Modern-day distributed systems such as Amazon Dynamo follow an eventual consistency model.

Summary:
This paper introduces various randomized algorithms to achieve consistency among replicas and also provides analytic modeling of these algorithms. It compares three algorithms for spreading updates: direct mail, anti-entropy, and rumor mongering.

Discussion:
In a distributed system, maintaining consistency across replicated sites is a huge challenge. The main goal is to reduce time delay and network traffic while also guaranteeing consistency. The authors describe the rumor mongering algorithm for spreading updates effectively: each infective site (a site which wants to share an update) randomly chooses another site (a susceptible) and shares its update with it. If it finds that the update (rumor) is already known to many sites, it stops spreading it. This provides eventual consistency in the sense that, given enough time, every site gets the update. The algorithm is more efficient than the simpler alternatives: i) direct mail (sending the update to all sites immediately), which is unreliable because message delivery can fail and a site may not know of all the other sites; and ii) anti-entropy (randomly resolving differences with another site), which causes huge network traffic and delay.

Contribution:
1. The paper provides good analytical models of the different epidemic algorithms and their variations. Specifically, I like how the authors explain why the pull-based model is good for anti-entropy, and why in rumor mongering it depends on the update rate in the system (pull is preferred if the update rate is high, otherwise push).
2. Using a checksum and an update list is a nice technique to reduce network traffic in the anti-entropy algorithm.
3. The authors explain nicely the criteria for judging rumor mongering variants (residue, traffic, delay) and how different parameters can be tuned for better spreading of updates. They then present various methods to improve upon these criteria. For example, using a counter with feedback improves the delay.
4. There is a small probability in rumor mongering that not all sites get the update. Using anti-entropy as a backup is a good idea.
5. Deleting an item is a challenge in a replicated system. Death certificates are used to remedy this problem: when a death certificate meets an obsolete copy of the data, that data is deleted. However, deleting the death certificates themselves then becomes a challenge, and the authors use dormant death certificates for this purpose.
6. Finally, it describes how spreading updates while using the network topology to favor nearby neighbors reduces network traffic. Using a uniform distribution causes high average traffic on some shared slow links (e.g., a single link between two regions). Therefore, sorting the sites in increasing order of distance and then randomly choosing partners with a probability that is a function of distance reduces the traffic on those links.
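
A small sketch of distance-weighted partner selection in the spirit of point 6: partners are chosen with probability falling off as 1/d^a rather than uniformly. The exact distributions evaluated in the paper differ, so the weighting below is an assumption for illustration:

    import random

    def pick_partner(me, sites, distance, a=2.0):
        # choose a partner with probability proportional to 1/distance**a
        # (a = 0 recovers the uniform choice)
        others = [s for s in sites if s != me]
        weights = [1.0 / (distance(me, s) ** a) for s in others]
        return random.choices(others, weights=weights, k=1)[0]

    # Example with sites on a line, so distance is just the index difference.
    sites = list(range(20))
    linear = lambda x, y: abs(x - y)
    print(pick_partner(0, sites, linear, a=2.0))   # usually picks a nearby site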

Applicability:
Maintaining consistency in distributed systems is still a challenge today. This is a very old paper which provides the foundation of rumor- or gossip-based algorithms, which work very well for spreading updates. It is still very much applicable today wherever every site in a distributed system can write to the data (I am not sure these algorithms are as applicable in distributed systems that use a master-slave configuration). Beyond that, it also provides a good analytic model for gossip-based algorithms in a distributed system, from which new algorithms or variations can be derived and simulated.

Summary:
Authors discuss the application of epidemic algorithms for spreading updates in replicated storage. They analyze techniques for their impact on convergence time, residue (% replicas without update after convergence), and generated traffic.

Problem:
Synchronizing replicated databases, where many or all replicas are modifiable, and all replicas have the same functionality (no hierarchy, no master-slave). Key goals are (1) eventual consistency, (2) reasonably fast convergence with low residue, and (3) minimal network traffic.

Solution(s):
Two epidemic algorithms are presented. The first, anti-entropy, consists of each replica routinely comparing its contents with a randomly chosen partner and resolving differences by timestamp. This is a simple epidemic, and does provide eventual consistency.
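
As a concrete illustration of the anti-entropy exchange just described, here is a minimal push-pull resolution by timestamp, assuming each replica is a dict of key -> (value, timestamp); the representation is an assumption, not the paper's:

    def push_pull(db_a, db_b):
        # after the exchange, both replicas hold the newer (value, timestamp)
        # for every key that either of them knows about
        for key in set(db_a) | set(db_b):
            va, vb = db_a.get(key), db_b.get(key)
            if va is None or (vb is not None and vb[1] > va[1]):
                db_a[key] = vb
            elif vb is None or va[1] > vb[1]:
                db_b[key] = va

    a = {"x": ("old", 1)}
    b = {"x": ("new", 2), "y": ("only-b", 3)}
    push_pull(a, b)
    assert a == b == {"x": ("new", 2), "y": ("only-b", 3)}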

The second, rumor spreading, consists of treating a new update as a 'hot rumor'. Any node that knows a hot rumor will spread this update to another random node each cycle. If the recipient already heard the rumor, the sender removes the update from its hot list with probability 1/k (tunable). Because a node may eventually stop propagating an update, this is a complex epidemic, and also it does not guarantee eventual consistency.

Some improvements:

  • Anti-entropy comparison using checksums. Made more effective when a linked list of updates is kept in order of most recently pushed/pulled from another replica. Batches of the hottest updates can be shared until the checksums match.
  • Counters: Instead of a 1/k chance of removing a rumor, remove it after k unsuccessful shares. This enforces roughly the same expected convergence time as using 1/k but avoids the long tail of probabilistic behavior (see the sketch after this list).
  • Non-uniform distributions: To reduce (a) traffic per link slightly and (b) traffic per critical link tremendously, can give higher probability to closer nodes. Convergence degrades slightly.
  • (Dormant) Tombstones (Death Certificates): Needed for replicated data with no authoritative node to avoid reintroducing deleted objects as new data.
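
A sketch of the counter variant from the second bullet above: a sender drops a rumor after K redundant pushes instead of flipping a coin on each one. The class layout is illustrative, not the paper's:

    import random

    K = 4   # drop a rumor after K redundant pushes (counter variant)

    class Site:
        def __init__(self):
            self.known = set()
            self.counters = {}        # hot rumor -> redundant pushes seen so far

        def receive(self, update):
            # returns True if the update was news to this site
            if update in self.known:
                return False
            self.known.add(update)
            self.counters[update] = 0     # start rumoring it ourselves
            return True

    def push_hot_rumors(sender, sites):
        partner = random.choice([s for s in sites if s is not sender])
        for update in list(sender.counters):
            if not partner.receive(update):
                sender.counters[update] += 1
                if sender.counters[update] >= K:      # a counter, not a coin flip
                    del sender.counters[update]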

Contributions:
+ Theoretically sound random methods for data synchronization. Tunable for varying goals/results.
+ Provided as many improvements for anti-entropy (the eventually consistent method) as for rumor spreading. Did not ignore the "backup method".
+ Notable is the dormant tombstone. If each node could devote X% of storage for tombstones, and dormant tombstones are kept on some fixed number of nodes (3-5), then storage for tombstones will scale linearly with number of nodes. Storage for tombstones could easily grow to be greater than that for the live data, if need be.

Discussion:
Random algorithms are a personal favorite, as they are often very simple to implement (as these are), and can prove surprisingly effective. I think the authors effectively solved their problems, especially regarding traffic generated by these methods. I don't see why the methods could not still be applied with success wherever updates need to spread through a distributed system, especially where nodes may not all know about every other node. I think the area with most room for additional work in epidemic algorithms is the non-uniform distributions, making them robust to ugly or pathological cases.

Summary: This paper describes several algorithms for propagating updates among all the replicas at the many sites of a network to maintain consistency. These algorithms are simple and require few guarantees from the underlying communication. The replicas satisfy eventual consistency.

Problem: There is a database with many replicas at thousands of sites in a heterogeneous, slightly unreliable, and slowly changing network. Each database update is injected at a single site and should be propagated to all other sites. Consistency among replicas means the contents should be the same long enough after the last update on the network. We need a propagation algorithm that is efficient and robust under the constraints above. It should trade off between (1) the total time needed to propagate the update to the whole network, and (2) the total traffic generated during the propagation.

Contributions:
(1) Direct mail: once a site receives an update, it notifies all other sites. Pros: it is efficient if the propagation completes successfully. Cons: the notification may fail (e.g., queue overflow or an unreachable site), and the site that first received the update becomes a bottleneck of the network.

(2) Anti-entropy: every site regularly chooses another site at random and resolves all the disagreements between the two sites' databases. Pros: the propagation is reliable and it guarantees eventual consistency. Cons: the comparison of the databases is traffic-consuming and slow. To alleviate this issue, they use an inverted index of the database by timestamp and exchange updates in reverse timestamp order.

(3) Rumor mongering: every site holding the update periodically chooses another site at random and makes sure that site receives the update. When a site finds, many times in a row, that the randomly chosen site already has the update, it stops propagating it. Pros: each cycle of rumor mongering consumes less than anti-entropy. Cons: there is a chance that not all sites receive the update. To ensure that all sites eventually become consistent, the paper proposes combining anti-entropy and rumor mongering.

(4) Deletions are handled with death certificates, to avoid incorrect propagation of outdated data over a deletion. Because keeping death certificates everywhere is space-consuming, a variant called dormant death certificates is proposed; each one carries a timestamp indicating when the certificate may be discarded.
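
A rough sketch of deletion via death certificates with a retention threshold, in the spirit of point (4); the record layout and retention policy below are assumptions, not the paper's parameters:

    import time

    RETENTION = 30 * 24 * 3600            # keep ordinary death certificates ~30 days (illustrative)

    def delete(db, key, now=None):
        # replace the item with a death certificate instead of removing it outright
        db[key] = {"deleted": True, "ts": now or time.time()}

    def merge_entry(db, key, incoming):
        # newest timestamp wins: a newer death certificate suppresses older values,
        # and an old value cannot resurrect an item that was deleted later
        current = db.get(key)
        if current is None or incoming["ts"] > current["ts"]:
            db[key] = incoming

    def expire_certificates(db, now=None):
        now = now or time.time()
        stale = [k for k, v in db.items() if v.get("deleted") and now - v["ts"] > RETENTION]
        for key in stale:
            del db[key]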

(5) Spatial distributions of nodes (e.g., on a line, on a rectangle, or based on a realistic graph) are introduced to replace the uniform random choice in the algorithms above, so the algorithms select partners with spatial considerations in mind.

Application: It was quite early, in 1987, to be thinking about the consistency of replicas across sites. These algorithms are very simple and guarantee eventual consistency. Today, gossip protocols and Amazon's Dynamo use the ideas of epidemic algorithms.

Summary:
In this paper, the use of mathematical models of epidemics (from the epidemiology literature) is presented and analysed to provide eventual consistency among databases replicated across a multitude of sites. Specifically, anti-entropy and rumor mongering are the two epidemic models drawn from (with various implementation modifications), with a deterministic algorithm tested as well.

Problems:
-Overall problem is ensuring eventual consistency among replicated databases with statistically good properties.
-It is far too costly to ensure strict consistency semantics in a large (growing) system with many sites of replication.
-Network overhead in the deterministic propagation model is excessive and expensive (it has the potential to overwhelm the network)
-The random nature of the epidemic algorithms allows for a non-zero probability that not all replicated databases will converge with respect to updates, additions, and deletions.

Contributions:
-The epidemic models and associated algorithms that attempt to simulate an epidemic (update/deletion/addition) moving throughout a population (the set of nodes associated with the system).
-The implementation, simulation, and analysis of the three algorithms presented at the beginning of the paper (direct mail, anti-entropy, and rumor mongering) and the optimizations made to each respectively to improve the rate of convergence.
-The push pull algorithms by which two replicated sites can converge and the analysis of push and pull respectively to identify the strengths and shortcomings of each in different scenarios.
-The use of death certificates and the slight modification to include the notion of dormant death certificates to overcome the problem of allowing a zombie (should be deleted) key:value pair to propagate back into the replicated databases.
-Spatial distribution analysis and the findings that placement of replications make a huge impact on performance (time to converge) of the system.

Application to Real Systems:
The concept of using epidemic models and their associated algorithms is an extremely useful notion and is in fact one that can still be found in real distributed systems today that use some level of replication. However, it seemed that the paper was making an assumption that replication would occur at a large number of sites (therefore creating the high network communication cost problem) where in practice (from my understanding) databases are only replicated two or three times at most.

Summary:
The authors describe and analyze 3 algorithms for achieving eventual consistency in a replicated database system - direct mail, anti-entropy, rumor mongering. The latter two are motivated by the theory of epidemics.

Problem:
The task at hand is to propagate updates to DB replicas to arrive at a consistent state. Since Xerox has several ethernets connected by gateways, the network becomes a bottleneck. Direct mail is not reliable since it does not guarantee complete propagation. Anti-entropy causes significant traffic congestion, can affect other applications trying to use the network, and is also slow. Rumor-mongering alone is not reliable. Thus the paper analyzes the various algorithms and tries to provide ways to achieve consistency with some performance guarantees.

Contributions:
1. They apply epidemiology to come up with effective ways to infect all candidates.
2. They compare various mechanisms such as blind vs feedback, counter vs coin, push vs pull and arrive at the better choices for improving performance.
3. The way they deal with deletes is novel. They use death certificates to propagate the delete and also introduce the concept of dormancy to prevent unwanted re-emergence of a deleted item.
4. They explore spatial locality to reduce network overhead and improve performance.

Applicability:
The way deletes are handled is quite novel. The Swift object store from OpenStack uses almost the exact same way to deal with deletes. It marks the deleted files with ‘tombstones’ and allows this to propagate to the replicas. Swift, however, uses the push method to achieve eventual consistency for updates which the authors here say isn’t as efficient as pull. The difference though is that Swift has a small number of replicas and each replica knows exactly how many other replicas it needs to sync with which makes the problem simpler. With regards to spatial locality, I think a better metric to use would be the available bandwidth. Also, the networks are much faster now, so not sure if this is still applicable.

Summary

In this paper the authors talk about maintaining mutual consistency among multiple sites that host a replicated database, in the context of the Clearinghouse servers of the Xerox Corporate Internet. They present analysis, simulation results, and the practical experience gained using different deterministic (direct mail) and probabilistic (anti-entropy and rumor mongering) methods based on the epidemic model. They implemented the complex epidemic model (from the mathematical theory of epidemiology) of rumor mongering with efficient optimizations, taking updates, deletions, and spatial distribution into account, with the goal of making eventual consistency algorithms that are efficient and robust and that scale gracefully as the number of sites increases.

Problem

With respect to achieving eventual consistency, direct mail (multicasting) was not entirely reliable, since individual sites do not always know about all other sites and since mail is sometimes lost. It is also susceptible to bottlenecks and results in redundant network traffic. Anti-entropy (every site picks another site at random and resolves the differences between itself and its partner) is more reliable than direct mail, but it requires examining the contents of the database and so cannot be used too frequently. In fact, anti-entropy propagates updates more slowly than direct mail. Rumor mongering, on the other hand, can run more frequently than anti-entropy cycles because it requires fewer resources at each site, but there is some probability that an update will not reach all sites.

Contributions

  • Application of an existing mathematical theory of epidemiology to consistent database replications.
  • The algorithms described in this paper do not depend on various guarantees from underlying communications protocols or on maintaining consistent distributed control structures, unlike their predecessors. They merely depend on the eventual delivery of repeated messages.
  • Modified the existing randomized algorithms to guarantee eventual consistency.
  • Improved the anti-entropy method with the introduction of checksums and timestamps.
  • Solved the "resurrection" problem (when truly deleted data reappears as valid data due to an inconsistent propagation mechanism) with death certificates and their modifications.
  • Provided mathematical models to analyze the performance of the methods.
  • Combined peel-back and rumor mongering, keeping updates in a doubly-linked list, to remove the failure probability, avoid an extra index, and make the model behave well when the network partitions or rejoins.
  • Analyzed the effect of spatial distributions on the model.
  • Provided extensive experimental and simulation results.
Applicability

The randomized solutions presented in this paper are very applicable today, since epidemic algorithms and gossip protocols, along with the concept of death certificates, are widely used in modern systems like Cassandra.


Summary
The paper discusses and analyzes epidemic algorithms to achieve consistency among databases at different sites when an update occurs on one.

Problems to solve
• Reduce network traffic when trying to achieve consistency on an update.
• The Clearinghouse service, which is referred to, utilized direct-mail and anti-entropy update strategies on an update. This led to a flood of mail messages due to constant exchanges between sites, which could lead to network breakdown.
• Anti-entropy needed to be further optimized, as it overloaded the network by requiring frequent examination of database contents.
• Deletion of an item from the database at a site, and avoiding resurrection of the deleted item.

Contributions
• For an insertion update the paper discusses three strategies.
Direct mail, which the paper deems unreliable because messages may be discarded on queue overflow or wait a long time when the destination site is inaccessible.
Anti-entropy is a reliable but expensive strategy in terms of network use. The paper proposes utilizing a checksum, a time window, and a maintained recent update list. This reduces network load provided an optimal window is chosen. Among complex epidemics, rumor spreading was explored; it provides better network usage, but there is a possibility that not all sites will receive the update.
• Modifications to rumor spreading were explored. Rumor spreading sends the rumor across the network and loses interest if the destination site already has the data. A blind approach was proposed which would lose interest without that interaction.
Another is the use of a counter to keep track of state and lose interest after a certain number of unnecessary contacts.
• Rumor spreading utilizes pushes and pulls to update data. Performance analysis was done on each of the operations, with pull being more efficient.
• Tradeoffs between the strategies were focused on.
• Rumor mongering has the possibility of non-zero residue, so a backup is needed to spread the update to the remaining sites, with anti-entropy being a candidate for the backup.
• In order to avoid resurrection of a deleted item and handle deletes the concept of death certificates was introduced.
• According to the simulation data provided, a spatial distribution proved better than a uniform distribution for reducing network traffic.

Applicability
Achieving consistency on updates efficiently and with minimum network usage is an important aspect in distributed systems. The proposed strategies have alleviated the problem to an extent.

Summary:
This paper introduces several randomized algorithms to maintain mutual consistency among the sites of a replicated database. The algorithms discussed are analogous to epidemics, and their pros and cons are quantitatively explained. Their practical use in a real system is also described.

Problem:
When the database is replicated at many different sites in a large, heterogeneous, slightly unreliable, and slowly changing network, it's too costly to achieve strict consistency (with transaction-based mechanisms) between all the replicas. On the other hand, a relaxed form of consistency, where each update injected at a single site must be propagated to all the other sites or supplanted by a later update in reasonable time, has been shown to be useful. The problem this paper tries to solve is how to design efficient, robust, and scalable algorithms that propagate the updates between the sites and make them eventually consistent.

Contributions:
Two randomized algorithms are discussed: anti-entropy, and rumor mongering. They are compared with direct mail algorithm.
Compared to other deterministic algorithms for maintaining consistency in a replicated database, randomized algorithms are easier to implement, and they don't depend on various guarantees from underlying communications protocols or on maintaining consistent distributed control structures.
Anti-entropy can recover automatically from failures in a large network, and it can guarantee that updates at a single site will propagate to all other sites. But it's very expensive, so optimizations are required: e.g., checksums, a recent update list, or an inverted index of the database by timestamp.
The rumor mongering algorithm can spread updates faster than anti-entropy and introduces very little network traffic, but it has a possibility of failure. We can run anti-entropy infrequently as a backup to rumor mongering to eliminate the failure probability.
Deletion is a challenging operation for both randomized algorithms. Death certificates can be used to make sure an item deleted at one site gets deleted at the other sites as well, but death certificates must be kept in every database for long enough, which wastes storage space. Dormant death certificates are used to reduce the required storage space.
This paper also discusses the effects of spatial distribution of sites on the performance of the epidemic algorithms. With anti-entropy, using a spatial distribution can significantly reduce traffic on critical links. Rumor mongering is less robust than anti-entropy against changes in spatial distribution and network topology.

Discussion:
Though the epidemic algorithms discussed in this paper seem promising, I am still not convinced about the advantage and necessity of maintaining the consistency of a replicated database with a large number of sites in a non-centralized way. For a system with more reads than writes, Facebook's caching system seems better. For a system with more writes (and deletes), the epidemic algorithms are not strictly consistent, and they can even fail to propagate some updates.

Summary:
The paper presents some algorithms and techniques for achieving eventual consistency in a replicated database where updates can occur in any replica. They use techniques from the area of epidemics in achieving this.

Problems:
The problem the authors are trying to solve is making a distributed database system with replicas at multiple geographical locations consistent. Their aim was to achieve eventual consistency using randomized algorithms for distributing the updates that take place at multiple locations. Other problems the authors were trying to solve were managing deletion in a distributed environment and taking the spatial distribution of the network topology into account.

Contributions:
1. The major contribution of the paper is introducing a randomized way of spreading updates in a replicated database instead of the deterministic approach.
2. The anti-entropy method, a type of simple epidemic in which two replicas resolve differences by sharing the contents of their databases, was definitely much better than the direct mail approach used previously to propagate updates between database replicas. This method will also eventually infect the entire population.
3. The method of comparing checksums of the database contents instead of comparing the entire database, together with the concept of a recent update list, helped improve the efficiency of the anti-entropy algorithm.
4. The idea of using theories from epidemiology to come up with the rumor mongering algorithm showed that ideas from an unrelated field like the biological sciences can be used in distributed systems.
5. The technique of combining rumor mongering and peel back (a variation of anti-entropy) using doubly linked lists, to make sure that all sites receive the update with probability one, was interesting.
6. The introduction of death certificates to make sure that items are deleted properly in a distributed database was a great idea. They made sure that a delete was handled like an update, to prevent deleted items from being resurrected by previous updates.
7. The use of a spatial distribution instead of a uniform distribution with anti-entropy helped reduce the traffic on critical links.

Relevance:
Many gossip algorithms in use today may have been inspired by the rumor mongering algorithm. Randomized algorithms are also still used for routing decisions in computer networks. The concept of death certificates may still be used in many distributed key-value stores.

Summary: The paper presents the design and analysis of an epidemic-like style of algorithms for ensuring data consistency across replicated databases. The authors show that this style of algorithm can, with high probability, ensure consistency with low convergence time and lower network traffic than direct mail, using anti-entropy as a backup.

Problem: Ensuring data consistency across machines connected via a slow and unreliable network is hard. Techniques have to be used to reduce excess traffic as much as possible while still ensuring that all machines on the network will eventually converge on a stable, consistent state. Old, deterministic techniques for ensuring consistency (such as direct mail) send n messages per update and assume that the network is reliable and that machines are alive.

Contributions: The authors propose a set of epidemic algorithms and a set of optimizations for those algorithms and quantitatively evaluate them. They use Xerox's corporate internet as the launching point for discussing said optimizations. For example, since their current network at the time had a “critical” link going across the Atlantic, they were very interested in reducing the number of messages that traverse that link. They hit upon the idea of taking spatial distribution into account when machines choose another machine to initiate a push-pull with. They actually have implemented one of these randomized epidemic algorithms on their network.

The paper went through a very thorough analysis of all conceivable avenues by which the system might end up in an inconsistent state (even though sometimes this possibility is exponentially small, as was the case with rumor-mongering). It had a nice solution to the death of data and some of the interesting problems that come with making the deletion of data consistent across machines (the answer was death certificates and dormant death certificates).

Applicability: While network resources have greatly increased since the time that this paper was written, the ideas presented in the paper should still be relevant to the modern day. With regards to replication, consistency among replicas is one of the major problems and this paper outlines problems with regards to networks that companies are still dealing with to this day. It was a very nice idea to use ideas and mathematics from epidemiology and apply it to this problem of consistency.

The authors discuss two major solutions named anti-entropy and rumor mongering for the problem of database replication in a distributed environment with the goals of reducing the network traffic and ensuring mutual consistency. The algorithms suggested here are randomized algorithms whose ideas are derived from the theory of epidemics.

Major Contributions :

1. Avoid use of direct mailing which incurs a lot of network traffic and all the recipient sites are not always known. The idea of using concepts from epidemics is quite interesting.
2. Anti-entropy relies on the idea that a site chooses another site at random and resolves differences in the database. This has high propagation latency but is reliable.
3. Rumor mongering selects sites at random and spreads the update. Those sites repeat the same and this process continues until a stage is reached wherein almost all sites have seen the update. This is quite fast but does not ensure that the update would reach all sites.
4. The idea of improving the anti-entropy algorithm by only exchanging the recent update lists and comparing checksums was a good one, but it was highly dependent on the time window for the update lists. They tried to overcome this problem by introducing a technique called peel back, which requires each site to maintain an inverted index of the database in timestamp order and amounts to a large memory overhead.
5. The anti-entropy could use a push, pull or a push-pull strategy. The pull strategy however converges faster. However, in case of limited connections, the performance of push is better.
6. Their performance experiments indicate that rumor mongering based on a counter with feedback resulted in the fewest remaining susceptible sites.
7. An interesting idea here was to start off with rumor mongering and support it with the reliable peel-back strategy. Each site uses a doubly linked list with a priority mechanism wherein a hot rumor (a useful update, based on feedback) is moved to the front of the list and cold ones are moved to the end.
8. The problem of replicating the deletion of an item has been handled using death certificates. However, there is a clear memory overhead of storing these death certificates for all deleted items for a period of 30 days. It is not known how this fixed time period is determined.
9. They ensure that sites which may have gone offline during the propagation of death certificates still become consistent, by retaining dormant death certificates at random sites and activating them when an obsolete copy of the deleted item is encountered.
10. The authors have performed several simulations and conclude that choosing a good spatial distribution can lead to lesser network traffic in critical links.

Relevance:
These algorithms are probabilistic, and it is not certain that they will always result in mutual consistency among all the sites. These replication techniques are not applicable to transactional databases; however, they are applicable to databases that accept relaxed consistency. Gossip protocols for replication are still applicable in current systems: NoSQL databases make use of these techniques, and a variation of the anti-entropy technique is used in Cassandra.

Summary: This paper presents several randomized algorithms to handle the consistency problem in a replicated database system. The proposed algorithms can achieve consistency quickly without making strong assumptions about the network, and they produce relatively little network traffic.

Problem: In a distributed replicated database system, when one site receives an update, it needs to propagate the update to the other sites. One way of doing this is direct mail, i.e., sending the update to every other site. This is not good because (1) a site does not necessarily know all other sites, and (2) it is fragile to partial network outages. A second way is called anti-entropy: a site periodically exchanges its whole database with a randomly chosen partner. This method also has shortcomings: (1) it consumes a lot of traffic due to the whole-database exchange, and (2) an update can take a long time to reach some sites.

Contributions:
(1) The proposal of epidemic algorithms that can achieve both fast convergence and small network traffic. Mostly those algorithms are similar to the anti-entropy algorithm, but they also include a step by which an infected site (a site that propagates its update to others) becomes removed (an updated site that no longer propagates its update).
(2) Using death certificate to handle deletions. When a deletion operation is made, the entry in question is not removed; instead it is marked as dead. This ensures that removed entries will not be resurrected by other sites. Then some mechanisms are employed to remove these death certificates when all the sites are aware of the deletion.
(3) Using spatial distributions to further reduce traffic. Instead of picking a site uniformly at random, sites can be picked according to a non-uniform probability distribution. The authors showed that with a suitable distribution, the overall traffic can be further reduced while the time to convergence is nearly unchanged.

Applicability: This work is applicable to most "stateless" database applications. By stateless I mean any update to the database will not require knowing the current state of the database. This is because all the algorithms discussed in the paper cannot provide a way to tell if the database is the most recent copy.

Summary:
In this paper the authors present randomized algorithms, inspired by the theory of epidemics, to distribute updates in a replicated database and achieve eventual consistency. The presented randomized algorithms, rumor mongering and anti-entropy, require few guarantees from the underlying network and produce less network traffic.

Problem:
In replicated databases, the popular algorithm for distributing updates at that time was "direct mail" (used in the Grapevine system), but it had several shortcomings. The authors try to solve these problems with their randomized approach.
- The algorithm is deterministic; it has to be aware of all the replicated servers beforehand.
- The post-mail mechanism is unreliable, messages may get dropped due to queue overflows.
- If destination node is inaccessible for a long time, it may never get the updates.
- The network traffic generated in the update distribution process is high.

Contributions:
- The idea of relaxing the strict consistency requirement (which would demand fast convergence) to an eventually consistent state.
- Proposal of rumor-mongering algorithm for distributing updates, with anti-entropy algorithm as backup for complete spread of update.
- Death certificate and dormant death certificate approach for preventing "resurrection" of obsolete copy of data after deletion.
- The idea of considering the spatial distribution of servers when selecting a partner server in the randomized algorithms. This approach reduces network traffic at the per-link level, not just at the global level.
- Use of an incremental checksum to quickly determine whether two nodes differ in their updates. This removes the requirement of shipping the entire database just for comparison.
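
One way an incremental checksum could be kept is sketched below: an XOR of per-item digests, updated in O(1) on each insert or delete, so two nodes can compare a single value before deciding whether a fuller exchange is needed. The paper does not spell out this exact construction, so treat it as an assumption:

    import hashlib

    class IncrementalChecksum:
        # order-independent checksum that can be updated per change without a rescan
        def __init__(self):
            self.value = 0

        @staticmethod
        def _digest(key, item):
            h = hashlib.sha1(f"{key}={item}".encode()).digest()
            return int.from_bytes(h[:8], "big")

        def add(self, key, item):
            self.value ^= self._digest(key, item)

        def remove(self, key, item):
            self.value ^= self._digest(key, item)   # XOR-ing twice cancels the item out

    # Two replicas can exchange just .value to decide whether a full
    # (or recent-update-list) comparison is needed at all.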

Flaws:
- Authors have not considered the possibility of simultaneous conflicting updates from two different servers.
- Authors have not presented any empirical result comparing performance with the then popular deterministic algorithms. The comparison would have helped in deciding the trade-offs in choosing consistency and performance.
- In the case of deletes, if the node is down for longer time than the time of holding the dormant death certificate, then it is still possible the obsolete copy may re-appear.

Applicability:
The idea of relaxing the consistency requirement for performance is quite popular today and applicable to websites like social networks. But a randomized algorithm backed by anti-entropy (which requires comparing entire databases) may not scale to the size of today's distributed systems and the volume of data they store.

Summary:
In biological sciences, epidemiology is the study of the distribution of diseases; this paper pulls from that idea to create algorithms that can quickly replicate data through a distributed system. It expands upon ideas laid out in the Grapevine system and related systems to efficiently pass updates to the entire network while remaining fault tolerant.
Problems:
Coordinating data replication in a large distributed system, where network failures and bandwidth limitations are common, becomes very complex when faults actually occur. The epidemic algorithms introduced in this paper describe a simple strategy for recovering from replication faults without adding unneeded strain on an already limited system.
Contributions:
Two major contributions were introduced: having the replication algorithm itself manage consistency, as opposed to relying on underlying communication protocols, and using a randomized algorithm, rather than a deterministic one, to distribute updates to neighbors. Epidemic algorithms, or in present-day terms gossip algorithms, are known for their availability while giving up guaranteed consistency. The two algorithms introduced are as follows:
• Anti-entropy compares replicated data between neighbors and reconciles differences.
• Rumor-mongering pushes or pulls updates from neighbors, called hot rumors, until it starts receiving the same update back from them. Once this occurs, these rumors are no longer hot and they stop being broadcast. Eventually the entire system stops broadcasting and at that point has converged.
There are several optimizations made to both of these algorithms to speed up convergence as well as minimize the percentage of the system that never receives the update (the residue). These included:
• Receiving feedback from neighbors, versus “blindly” assuming they received your update, decreased residue.
• Pulling instead of pushing sped up convergence and decreased residue.
• Connection-limiting updates makes pushing rumors perform significantly better.
• Peel-back as a method for reconciling two databases by comparing checksums of update logs along a doubly-linked list.
Along with these optimizations, the researchers tried combining both algorithms, so that when updates weren’t fully distributed by rumor-mongering, either anti-entropy or another round of rumor-mongering could resolve the fault quickly. Also, deleting data from the system had to be handled with “death certificates” (delete journal entries) so it wouldn’t resurrect later.
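As a rough, hedged sketch of how that combination might look (a toy in-memory model, not the paper's implementation; the Site class, the 1/k coin, and the push-pull reconcile here are illustrative choices):

    import random

    class Site:
        def __init__(self, name):
            self.name = name
            self.db = {}       # key -> (value, timestamp)
            self.hot = set()   # keys this site still treats as "hot rumors"

        def update(self, key, value, ts):
            self.db[key] = (value, ts)
            self.hot.add(key)

        def rumor_round(self, sites, k=2):
            """Push hot rumors to one random partner; lose interest with prob. 1/k on a useless push."""
            if not self.hot:
                return
            partner = random.choice([s for s in sites if s is not self])
            for key in list(self.hot):
                value, ts = self.db[key]
                if key in partner.db and partner.db[key][1] >= ts:
                    if random.random() < 1.0 / k:   # partner already knew it: maybe stop spreading
                        self.hot.discard(key)
                else:
                    partner.update(key, value, ts)

        def anti_entropy_round(self, sites):
            """Backup: push-pull reconcile of the entire database with one random partner."""
            partner = random.choice([s for s in sites if s is not self])
            for key in set(self.db) | set(partner.db):
                mine, theirs = self.db.get(key), partner.db.get(key)
                newest = max((x for x in (mine, theirs) if x is not None), key=lambda v: v[1])
                self.db[key] = partner.db[key] = newest

    sites = [Site(i) for i in range(50)]
    sites[0].update("x", "v1", ts=1)
    for _ in range(20):                 # cheap rumor rounds do most of the spreading
        for s in sites:
            s.rumor_round(sites)
    for _ in range(5):                  # a few anti-entropy rounds mop up stragglers
        for s in sites:
            s.anti_entropy_round(sites)
    print(sum("x" in s.db for s in sites), "of", len(sites), "sites have the update")

Running cheap rumor rounds first and only occasional full reconciles afterward mirrors the hybrid described above: rumor mongering does most of the work, and the expensive anti-entropy pass only cleans up whatever it missed.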
My main gripe with the system is that it could introduce a single point of failure at the site where the initial update was introduced. Also, much like probabilistic data structures (e.g., Bloom filters), this system is probabilistically consistent and never 100%. Using a quorum to read data could possibly solve this issue if consistency is a requirement.
Applicability:
When I interned on the Simple Storage Service (S3) team at Amazon Web Services (AWS) I first learned about these algorithms, as they were used for their Discovery and Failure Detection Daemon (DFDD). In order to spread health information quickly throughout a hundred thousand servers, gossip protocols were used to share heartbeat information among peers, where it was eventually aggregated. Specifically, anti-entropy and rumor-mongering were used, as they could converge in a little under 20 seconds and consistency was not much of a concern for this system. Another example of rumor-mongering being used in everyday software is HashiCorp's open-source Serf, which similarly detects failures from absent heartbeats using a gossip protocol.

Summary: In this paper, the authors discuss their approach to maintaining the consistency of a database that is replicated across many geographically distributed clusters. The level of consistency they want to guarantee is that eventually, with high probability, all sites will see the most recent version of updates. They discuss their randomized treatment of this problem and analyze its performance using classic epidemic models.

Problem: To build a distributed database system in which each site eventually sees every update, a naive approach would be to broadcast the update every time a node sees it. This baseline has multiple problems:
- The network overhead is too large.
- It might not be robust to node/network failures.

Contributions: The contribution of this paper is to propose what they call the rumor mongering approach. In this approach, there are three types of nodes: those that have seen the update and want to broadcast it, those that do not yet know about the update, and those that have seen the update but no longer want to broadcast it. By randomly selecting neighbor nodes to broadcast updates to, and diminishing the probability of continuing to broadcast, eventually, with high probability, all nodes will see the most recent update.
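A small hedged simulation of that susceptible/infective/removed dynamic (push spreading with the 1/k "coin" loss of interest; the parameter names are mine, not the paper's) shows the residue this leaves behind:

    import random

    def simulate_rumor(n=1000, k=1):
        """One update spreading by push rumor mongering with the 1/k 'coin' loss of interest.
        States: susceptible (never heard it), infective (spreading), removed (stopped spreading)."""
        SUSCEPTIBLE, INFECTIVE, REMOVED = 0, 1, 2
        state = [SUSCEPTIBLE] * n
        state[0] = INFECTIVE
        while INFECTIVE in state:
            for i in [j for j, s in enumerate(state) if s == INFECTIVE]:
                target = random.randrange(n)
                if target == i:
                    continue
                if state[target] == SUSCEPTIBLE:
                    state[target] = INFECTIVE
                elif random.random() < 1.0 / k:      # useless contact: lose interest
                    state[i] = REMOVED
        return state.count(SUSCEPTIBLE) / n          # residue: fraction that never heard the rumor

    print(simulate_rumor(k=1), simulate_rumor(k=2))

With k = 1 a noticeable fraction of sites (around 20%, if I recall the paper's table correctly) never hears the rumor, which is exactly why the scheme is backed up by anti-entropy.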

Compared with baseline approaches, e.g., direct mail and anti-entropy, rumor mongering uses the network more efficiently.

The authors also consider technical details, e.g., how to deal with deletion by maintaining death certificates, and the tradeoff between how long and how many such certificates to maintain and their impact on network utilization. Other approaches, such as using anti-entropy as a backup for direct mail or rumor mongering, are also discussed.

Applicability: There are interesting limitations that I think might apply to this work. First, the consistency of the "database" that the authors discuss covers, in the classic database view, rather limited operations, i.e., operations on a single record. It is not clear how to extend the proposed approach to more complicated "transactions". Second, the study is based on old, slow networks. It would be interesting to revisit how the tradeoffs change on modern fast networks; in my mind, this could promote the use of a more hybrid approach.


PROBLEM:
* Keeping a large number of distributed replicas consistent in the face of updates
* Writes are allowed to all objects at all replicas (unlike a master-slave model)
* Network is non-uniformly connected, with critical links between clusters
* Want to minimize both replication delay and network traffic, with eventual consistency

SUMMARY: The authors analyze and simulate existing schemes and propose new randomized rumor mongering algorithms based on epidemic models. They also propose death certificates as a solution for handling deletions, and some approaches to handle non-uniform network topologies.

CONTRIBUTIONS:
* Rumor mongering needs only limited guarantees about message delivery from the network
* Introduced an achievable "almost-surely eventually consistent" model and showed how their algorithms achieve it
* Defined and provided analytical models to compute various performance metrics like residue, network traffic and delay
* Studied the importance of feedback in regulating propagation frequency, efficient deterministic variants of parts of their randomized solution (coins vs counters), and the effect of other aspects like connection limits, hunting, etc.
* Analyzed the difference between push, pull and push-pull schemes, and showed that pull works much better than push (the simple recurrences are sketched just after this list)
* Introduced checksums along with recent updates list to reduce traffic in anti-entropy
* Introduced the notion of death certificates (with creation and activation timestamps) to handle deletes properly (allowing subsequent creation/update on the same key)
* Analyzed linear, rectangular and graph-based spatial distributions of nodes, and how non-uniform randomized variants of their algorithms perform on these topologies
* Showed how push and pull variants of rumor mongering with non-uniform distributions can actually perform worse than uniform distributions in terms of residue for some pathological networks
* Emphasized the need for anti-entropy schemes to back up rumor mongering to guarantee eventual consistency
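On the push vs. pull bullet above, the simple deterministic recurrences (as I read them: p_i is the probability that a site is still susceptible after cycle i of anti-entropy, and the push line uses the usual e^{-1}-style approximation) make the gap easy to see:

    import math

    p_pull = p_push = 0.25          # suppose a quarter of the sites are still susceptible
    for i in range(1, 6):
        p_pull = p_pull ** 2                         # pull: p_{i+1} = p_i^2  (quadratic convergence)
        p_push = p_push * math.exp(-(1 - p_push))    # push: p_{i+1} ~= p_i * e^{-(1 - p_i)}
        print(f"cycle {i}:  pull {p_pull:.3e}   push {p_push:.3e}")

Pull's probability of remaining susceptible squares each cycle, while push only shrinks it by roughly a factor of e per cycle once p is small, which is the basis for the "pull works much better" claim.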

IMPACT/APPLICABILITY:
* Epidemiological models (gossip protocols) are now widely used, both in distributed systems (key-value stores, BitTorrent) as well as network routing (ad-hoc and wireless sensor networks). They are robust to failures of nodes and links in the network. They are often efficient (in terms of network traffic and delay before reaching consistency) and simple to implement.
* Distributed key value stores like Cassandra use anti-entropy as well as death certificates as a way to ensure eventual consistency, particularly for deletes (see http://wiki.apache.org/cassandra/DistributedDeletes).

PROBLEMS:
* The algorithms crucially depend on the assumption that clock skew across nodes is negligible, which may not always be true.
* In case of inconsistent updates, it may be useful to have better resolution schemes than simply preferring the newer one, for instance picking max.

Summary: this system attempts to address the problem of replicating a
database across multiple hosts in a heterogeneous network topology. They use a
combination of anti-entropy and epidemic protocols to bring the system to an
eventually consistent state.

Problem: there are various ways to replicate over the network. One way
is an all to all scheme that notifies all replicas of an update. This can incur
significant network congestion. Anti-entropy is also possible, where each DB is
compared to another, and the necessary changes are applied to bring them all
to a consistent state. This is expensive and requires a lot of bandwidth. A
heterogeneous topology can complicate matters because some replicas may not be
reached when there is an update to propagate.

Contributions: this system considers multiple existing protocols
that aim to be eventually consistent. Since a major bottleneck in their
system is the network, they optimize for minimizing network congestion.
They show how having a knowledge of the network topology can be used to
decide what parameters to use in the epidemic algorithms. Death
certificates are used here, but there are still some unanswered
questions about them, like exactly how long they should be kept for, or
who should keep them. Many of the algorithms they propose, including the
death certificates, are timing sensitive. This seems hazardous,
considering the distributed nature of the system, and thus the potential
for unsynchronized clocks.

Applicability: the authors have not only tried to address the
problem, but have also done a good job at defining what the
complications of replication are, which is invaluable to future
implementors. They take the existing notion of death certificates and
show how they can be used in this system to enforce consistency with
data deletion. As the authors admit at the end, this is just the start,
and there is still a lot of work to do. A lot of the algorithms they
propose take a lot of tweaking to get them to align with the
requirements for consistency and networking constraints.

Summary:
This paper describes several randomized algorithms for distributing updates among multiple data servers and some quantitative analyses on these algorithms as well.

Problems:
In the scenario where a database is replicated at many sites, all the replicas need to be kept consistent in some manner. The direct mail approach causes a serious performance problem: it creates network traffic beyond the capacity of the network. Another consideration is fault tolerance: if update packets are lost somehow, how do we ensure the databases still become eventually consistent? Besides these, other problems can occur during update propagation and item deletion.

Contributions:
The authors give a basic model of synchronizing multiple databases, and two randomized update-distribution algorithms are proposed under this model: anti-entropy and rumor mongering.
Deleted items might be resurrected by anti-entropy and rumor mongering, so death certificates are proposed to ensure the delete operation is propagated to all servers and the item is not resurrected by the underlying consistency mechanism. Keeping death certificates everywhere is not scalable and has high time and space costs as well as extra network traffic; to address this, a revised version called dormant death certificates is proposed, which reduces the number of servers that retain each certificate.
When using anti-entropy with dormant death certificates, an update with a timestamp between the original deletion and the reactivation could be lost, so a second (activation) timestamp is added to the certificate to control this.
When using rumor mongering with dormant death certificates, the same problem arises and the same solution is adopted.
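A hedged sketch of the two-timestamp certificate described above (the field and function names are illustrative, not the paper's):

    from dataclasses import dataclass

    @dataclass
    class DeathCertificate:
        key: str
        deleted_at: float      # original deletion timestamp, used for ordinary comparisons
        activated_at: float    # second timestamp, reset when a dormant certificate is reactivated

    def apply_certificate(db, cert):
        """Drop the stored copy only if it is older than the original deletion time,
        so an update newer than the deletion survives."""
        item = db.get(cert.key)                 # item is (value, timestamp) or None
        if item is not None and item[1] < cert.deleted_at:
            del db[cert.key]

    db = {"user:42": ("stale value", 10.0)}
    apply_certificate(db, DeathCertificate("user:42", deleted_at=20.0, activated_at=95.0))
    print(db)    # {} -- the obsolete copy cannot resurrect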
A spatial distribution model of the network is introduced to help choose nearby servers and to analyze the performance of anti-entropy and rumor mongering.

Applicability:
The idea of eventual consistency is widely used in many distributed systems, such as GFS. According to the CAP principle, consistency, availability and tolerance of network partitions cannot all be achieved simultaneously, so a looser consistency model is acceptable. Randomized algorithms like anti-entropy and rumor mongering reduce network traffic efficiently, so they are suitable for synchronizing database replicas across different regions.

Limitation:
Most solutions mentioned in the paper are based on the assumption that all the servers are time-synchronized. However, clock skew is possible in a distributed system, and if clocks drift, most of these solutions would not work correctly.

Summary:
This paper presents several strategies for spreading updates in a database replicated at many sites, including direct mail, anti-entropy, complex epidemics and their combination. Death certificates are used to deal with deletion. Spatial distributions in the choice of partners are also covered.

Problem:
When a database is replicated at many sites, an update made at a single site should eventually be propagated to all the other sites to maintain consistency. A relaxed form of consistency can improve performance while still eventually reaching a consistent state.

Contributions:
This paper introduces several strategies along with their analysis, a method to deal with deletion, and the use of spatial distributions.
(1) Direct mail. A site with an update attempts to notify all other sites. The PostMail operation is nearly but not completely reliable: it can fail when queues overflow or destinations are inaccessible for a long time, the updating site may have no accurate knowledge of the full set of sites, and the scheme introduces an O(n) bottleneck at the originating site.
(2) Anti-entropy. Sites periodically compare databases with a randomly chosen partner and resolve any disagreement (in the original scheme, the correct database value was then remailed to all sites when two participants disagreed). This method consumes a lot of network capacity, but the update is eventually distributed through the network. The paper uses checksums to compare database contents cheaply; a variant called peel-back maintains an inverted index of the database by timestamp and exchanges updates in reverse timestamp order.
(3) Rumor mongering (one kind of complex epidemic). An active site notifies other sites at random and shares the update; each site receiving the update also becomes active. When an unnecessary notification is made, the active site becomes removed with probability 1/k. Eventually the system converges, though this method has some probability of leaving sites un-updated.
(4) Anti-entropy can be used infrequently to back up a complex epidemic. This ensures that every update eventually reaches every site, with low network traffic.
(5) Deletion with death certificates. When old copies of deleted items meet a timestamped death certificate, the old copies are removed.
(6) Consider a spatial distribution when choosing nodes to propagate to, instead of a uniform distribution.

Applicability:
Eventual consistency is a good balance between consistency and performance. The strategies proposed in this paper give pioneering thinking on this problem. The combination of anti-entropy and complex epidemics is also a simple and smart idea. It may have applications in industry.

Summary: In this paper they model replicated-database update propagation as epidemic processes, and by using both analytical models and simulation they managed to significantly decrease the traffic of their replicated system.

Problem: Mutual consistency of database replicas is a difficult problem when updates can happen at any location. In addition, standard methods that try to propagate all updates to all replicas at once result in network congestion. Even the anti-entropy method, in which each server randomly picks another server and reconciles databases with it, resulted in too much network traffic. Finally, updates need to propagate in some reasonable amount of time.

Contributions: The first part of the paper discussed direct mail, anti-entropy, and more complex protocols in the context of epidemic algorithms. They provide notation to understand the protocols, and provide some basic analysis.

Direct mail is shown to be not completely reliable, though it tries to update every known server. Anti-entropy is completely reliable, eventually getting the update to every node. For anti-entropy, some improvements are given. One is to exchange checksums instead of comparing the entire databases; if the checksums match, there are no updates that need to be exchanged.
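To illustrate the checksum idea, here is a hedged sketch of one plausible implementation (not the Clearinghouse code): an order-independent XOR-of-hashes checksum, plus a recent-update exchange done before any full comparison.

    import hashlib

    def item_hash(key, value, ts):
        digest = hashlib.sha256(f"{key}|{value}|{ts}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def db_checksum(db):
        """Order-independent checksum: XOR of per-item hashes, cheap to keep up to date."""
        checksum = 0
        for key, (value, ts) in db.items():
            checksum ^= item_hash(key, value, ts)
        return checksum

    def exchange(db_a, db_b, recent_a, recent_b):
        """Exchange recent-update lists first, then compare checksums; only if they still
        disagree would a full database comparison be needed."""
        for key, (value, ts) in recent_a.items():
            if key not in db_b or db_b[key][1] < ts:
                db_b[key] = (value, ts)
        for key, (value, ts) in recent_b.items():
            if key not in db_a or db_a[key][1] < ts:
                db_a[key] = (value, ts)
        return db_checksum(db_a) == db_checksum(db_b)

    a, b = {"x": ("v1", 1)}, {}
    print(exchange(a, b, recent_a={"x": ("v1", 1)}, recent_b={}))   # True: checksums now agree

Because the checksum is an XOR of per-item hashes, it can also be maintained incrementally: each update just XORs out the old item's hash and XORs in the new one.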

More complex epidemics - such as rumor spreading - have their own probability of failure that is actually a tunable parameter, with certain tradeoffs. They analytically develop a model for tuning this parameter, and describe three important criteria for judging an epidemic algorithm: residue, traffic, and delay. They then describe many variations and improvements on the rumor spreading algorithm.

Death certificates are another contribution. They become a necessity when data is deleted, since it is both unknown whether the delete has reached all servers (in the direct mail case) and possible that a delete could be undone if an older update presents itself to the server. Thus the delete must be spread like any other update and kept for some time; but if kept too long, death certificates will outgrow the system's storage.

Finally, the paper describes how the algorithms are affected by different spatial distributions. They find that high traffic on critical links makes uniform anti-entropy too costly. They found that algorithms that adapt to the number of neighbors within a given number of hops, rather than to the number of hops itself, are more effective. They also found that push-pull rumor mongering is improved by the use of nonuniform spatial distributions.

General Comments: One important idea behind this paper is to look at other areas and see how they relate to your field. This is, of course, more general than just distributed systems. Another is that it can be useful, and indeed important, to derive analytical models for algorithms. These can be used to tune the algorithm for whatever suits the purpose. In terms of distributed systems, this paper describes an important point for any distributed system - consistency. Much as we saw in Memcache, it is not required that all parts be totally up-to-date, but eventually consistent. In addition, they have the same problem of network bandwidth being the bottleneck. It seems this is also a consistent point in distributed systems work, where the number of connections and messages becomes too much for the network to bear. Thus the work is required to reduce the load.

Flaws: I am a little critical of the randomized method of rumor spreading, since it is not guaranteed to reach every replica.

Summary:
The paper describes randomized, epidemic based algorithms for distributing updates to replicas for eventual consistency.

Problem:
Existing algorithms for the Clearinghouse servers of Xerox suffer from inconsistent replicas and induce a high amount of traffic in the network.

Contributions:
- Algorithms that, unlike previous work in this area, do not depend on guarantees from the underlying communication network.
- The Anti-entropy algorithm which runs in the background to automatically recover from failures in direct mail. It provides the eventual consistency guarantee over time.
- Rumor spreading algorithm to rapidly disseminate changes among nodes while dying down eventually.
- Implementation optimizations for various algorithms.
- Death certificate mechanism to deal with potential resurrections of deletions.
- Dormant death certificate mechanism to deal with storage problems with certificates.
- Demonstrating how another domain's analysis can be used in Computer Science.
- Comparative analysis of various approaches to demonstrate scenarios in which algorithms perform well or worse.

Limitations:
- All algorithms seem to have parameters that need to be tuned depending on the network.
- Single malicious/faulty server can use death certificates to destroy all copies of a key in the system.

Applications:
- Good algorithms for eventually consistent systems.
- Randomized algorithms presented in the paper can be used effectively for many distributed systems to produce a certain level of confidence in replication.
- Spatial distribution analysis can be used as groundwork for more efficient distribution mechanisms built on top of it.

Summary:
This paper talks about methods that aid in pushing updates efficiently across a replicated, distributed database system, under the assumptions of a slowly changing network and many replica sites. The approach is also expected to scale well as the number of replicas increases.

Problem:
Propagating updates in a replicated database system efficiently. Existing approaches like primary-site update algorithms rely on the single site that received the update to distribute the updates. This causes: 1) a bottleneck at the single server that was updated, 2) a single point of failure that can lose the updates, 3) centralized control.

Direct mail approach suffers from reliability issues since the updates were sent just once and there was no way of recovering if the updates were lost. It also suffered from the disadvantage that the primary site need not always be aware of the other candidate sites. Anti Entropy is slower but more reliable than direct mail.

Contributions:
1. This was one of the earlier papers to implement a variety of replication strategies with an eventual consistency goal. The idea of linking epidemiology concepts was intuitive.
2. The relaxed consistency model aided scalability, as updates need not be spread immediately from one primary site to all secondary sites.
3. The randomized choice of sites in the anti-entropy and rumor-mongering approaches also weakens the dependency between the size of the network and the time it takes for everyone to receive the updates.
4. Push and pull mechanisms under different scenarios. Pushing performs better in a system with fewer updates, since no site needs to poll another site for the latest data.
5. Optimizing anti-entropy by using recent update lists and checksums to minimize the amount of data exchanged between sites.
6. The minimization technique, which ensures that the counter controlling when a site enters the “removed” phase is incremented slowly.
7. Anti-entropy supplementing rumour mongering in order to prevent lost updates.
8. The death certificate mechanism to prevent resurrection of deleted data.
9. Spatial distributions so as not to overload the critical links.

Limitations:
There is no global clock to synchronize the sites; the system uses timestamps based on Greenwich Mean Time.

Applicability:
Systems that can tolerate a relaxed consistency model, e.g., the Swift object store.

Summary
This paper discusses the problem of maintaining consistency across all replicas of a distributed database/store. It introduces the concept of epidemic algorithms which can drive the system to be eventually consistent.

Problem
A distributed store is almost always replicated for better locality, fault tolerance, reliability and availability. The challenge is to maintain data consistency across all these replicas. When there is a write to a data item, the server that receives the write has the responsibility of spreading the new value to all other replicas in the system. The obvious way is Direct Mailing: the write site sends information about the new update to all servers. Problems: 1. Too many messages to send. 2. The write site must have global knowledge of the set of servers, even as servers join and go down.

This paper tackles the problem of consistent replication with randomized algorithms that are similar to spreading of epidemic diseases.

Contributions
The two major techniques described (not sure if they were first introduced elsewhere) are:

a. Anti-Entropy - Every server periodically syncs up with some randomly selected server to reconcile updates. The mechanism can be pull, push or push-pull. It is shown that pull can achieve good spread of the update in a shorter amount of time than push.

b. Rumor mongering - An infected server picks random servers and informs them about the update. In turn, the newly infected servers propagate the news further. After some time, they decide to stop spreading the rumor.

Some other contributions:
0. One of the earliest implementations of an eventually consistent system.
1. Peel-back, incremental checksums and the recent update list used with the anti-entropy technique were neat.
2. Deletion, death certificates and dormant death certificates were handled thoroughly.
3. Considering the spatial distribution when choosing nodes to propagate to, rather than a uniform distribution, seemed reasonable and neat.

Discussion
This paper seems to be the precursor of many gossip protocol variations introduced later. This system also seems to be one of the earliest works implementing an eventually consistent system. The decisions made look mathematically sound. A distributed delete operation is always hard to deal with; this paper recognizes this and introduces death certificates. Cassandra tombstones are very similar to this. I felt that they over-designed (dormant certificates and the activation timestamp) to cover very rare corner cases regarding death certificate deletion.

Relevance
The concepts discussed in this paper are highly relevant today. Amazon Dynamo uses anti-entropy with Merkle trees to drive replicas to consistency. BitTorrent and other P2P systems use similar gossip protocols to share information about files. I believe this work created a big impact and spun up a new area in distributed systems - gossip-based protocols. It also feels to me like botnets have taken a lot of inspiration for their scanning techniques from the ideas discussed in this paper.

Summary
In a replicated database, maintaining mutual consistency is hard, and this paper addresses the problem in gory detail with several randomized solutions and their limitations. The authors adapt techniques closely analogous to epidemics to spread synchronization information among different sites. The main goals are to minimize traffic overhead and decrease synchronization time. They also analyse the outcome and cost (i.e. traffic and time) of their solutions on different network topologies.

Problem
Synchronizing a database which is replicated at many sites, all of which are modifiable, is the main target problem of this paper. In our previous paper, Facebook circumnavigated this problem to a great extent by allowing only the "master" server to be writable. In a naive approach like "Direct Mail", the traffic cost to obtain eventual consistency will be roughly O(n^2). They randomize the algorithm to improve performance. Anti-entropy was the previous best work, but it incurs a lot of bandwidth and needlessly compares the databases even when there are no updates. They also address the effect of network topologies on the algorithms and try to tune the algorithms for that.

Contributions

  • Adapting epidemic theory models to update mongering in a distributed system.
  • Their approaches definitely beat the Direct Mail and Anti-Entropy approaches in performance (w.r.t. traffic and time).
  • Complex epidemics: in rumor mongering, removing infected nodes is a neat way of stopping the wave of update-spreading traffic.
  • Including the effect of network topologies and analyzing them in detail is useful. They were able to optimize performance greatly based on topology information, though they accept that an arbitrary dynamic network is a difficult parameter to deal with.
  • Peel-back: merging anti-entropy with rumor mongering to achieve 100% infection, while also doing away with the need for an inverted index by using a doubly linked list of updates, is a cool idea.
  • Simplicity of the randomised algorithms in solving complex problems.

Applicability
Rumor mongering (gossip) is still widely used in replicated-database synchronization services. Nowadays we don't see many complete database replicas where all the sites are writable, but similar approaches like rumor mongering can be used for propagating network topology information.

This paper discusses the problems that arise while trying to maintain consistency between replicated databases and tries to derive a solution where this consistency can be achieved with a minimum amount of network traffic and in the least amount of time.
The main contribution of the paper is that it presents a detailed comparison of three algorithms and builds a solution from them.
Direct Mail, where an update is sent to all other sites. This is simple, but the hard problem here is maintaining knowledge of all active sites.
Anti-Entropy is a simple epidemic algorithm where a site randomly chooses another site, compares databases with it, and resolves any differences. This method generates a lot of network traffic and might take a long time to converge. To improve performance, sites can maintain recent update lists that are exchanged first; checksums are then computed, and the databases are compared only if the checksums disagree. The authors improve this idea by suggesting that sites exchange updates in reverse timestamp order and recompute the checksum until agreement is reached (which is memory-inefficient, since you need to hold another identical list in inverted order).
Rumor Mongering is a complex epidemic algorithm where a site picks another site and spreads the update until it loses interest (because the recipient already knows it, or some other factor). The authors show from their results that, using parameters such as feedback (the sender loses interest only if the recipient already knows the rumor), counters (the sender loses interest after a number of unnecessary contacts) and push-pull (both sides send counter values and the site with the smaller counter is updated), the system converges much faster and generates less traffic. But the drawback of this algorithm is that the update might not spread to the entire system.
To solve the above problem, the authors suggest using anti-entropy as a backup mechanism along with rumor mongering. The idea of peel-back was improved: every site keeps a doubly linked list that holds new rumor updates at the beginning of the list, and these new updates are sent out to other sites first. Database deletions were handled using death certificates, and this idea was improved by having dormant death certificates kept at only a few sites (to save space), which remove obsolete updates.
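A rough sketch of the peel-back idea described above, substituting an ordinary Python list (newest entry first) for the doubly linked list and a direct tail comparison for the checksum test; purely illustrative:

    def peel_back(log_a, log_b):
        """Each log is a list of (timestamp, key, value) kept newest-first (the paper's
        doubly linked list).  Peel entries off the newest end of both logs, building a
        shared prefix, and stop as soon as the remaining older suffixes already agree
        (standing in for 'recompute checksums until they match')."""
        shared, i, j = [], 0, 0
        while log_a[i:] != log_b[j:]:
            # take whichever pending entry is newer; the other site gets it via the shared prefix
            if i < len(log_a) and (j >= len(log_b) or log_a[i] >= log_b[j]):
                entry, i = log_a[i], i + 1
            else:
                entry, j = log_b[j], j + 1
            if entry not in shared:
                shared.append(entry)
        suffix = log_a[i:]                         # identical on both sides at this point
        return shared + suffix, shared + suffix    # both sites end up with the same log

    a = [(5, "x", "v5"), (3, "y", "v3")]
    b = [(4, "z", "v4"), (3, "y", "v3")]
    print(peel_back(a, b))   # both sides: [(5, 'x', 'v5'), (4, 'z', 'v4'), (3, 'y', 'v3')]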
The main ideas of the paper, such as anti-entropy and rumor mongering, are implemented as epidemic/gossip protocols in routing and peer-to-peer file sharing. However, nowadays, with networks becoming faster, communication becoming more reliable, and CPU and memory becoming cheaper, my guess is that Direct Mail might be the fastest and lowest-overhead (in terms of network traffic) way to attain consistency.

Summary: The biological science of epidemiology provides a theoretical model that can be applied to database replication. The deterministic direct mail multicast model is not efficient and consumes too much bandwidth. Peel-back (anti-entropy) and rumor mongering can be combined with death certificates to provide an efficient probabilistic-based epidemic replication mechanism that replicates data efficiently and creates an eventually consistent system using relatively less bandwidth.

Problem: The CIN Clearinghouse server network had grown to three levels of domain hierarchy with thousands of computers. This led to performance problems while trying to achieve consistency across a large, heterogeneous and unreliable network. Disagreement among sites triggered the existing remailing step (which followed anti-entropy), creating replication email traffic far beyond the capacity of the network and starting to cause failures. The key goals were to reduce the network load imposed by the server update process and to make the system eventually consistent.

Contributions: Rumor Mongering + anti-entropy (with peel-back and a doubly linked list) can be combined to propagate replication with far less bandwidth than the direct-mail (multicast) model. The anti-entropy is just an extra precaution to guarantee consistency. Probabilistic based rumor mongering and death certificates are the biggest contributions, while their method of guaranteeing consistency by applying anti-entropy in a different way is also a contribution (since it had not been done this way before).

1-Replication by “Rumor Mongering” (complex epidemic): In this epidemic model machines have three states: 1-infective, 2-removed, 3-susceptible. A computer with an update becomes infective and passes it on via a push, pull, or push-pull mechanism to another machine chosen randomly, with probabilistic bounds that favor nearer machines (based on a linear model). The sender maintains a list of infective updates; recipients attempt to insert the update into their own database and add all new 'hot' updates to their own infective list. The problem is how long the updates should stay 'hot'. The rumor spreading mechanism can use any of the following models to decide when an active individual will lose interest in sharing the rumor and shift to the 'removed' state: A-probability, B-blind vs. feedback, and C-counter vs. coin.

2-Consistency via “Anti-entropy” (simple epidemic):
Anti-entropy is failure tolerant in the sense that it will eventually spread updates everywhere and, unlike direct mail, it won't fail to spread them somewhere (due to a queue crashing, etc.). Two states are possible: 1-infective, 2-susceptible (so there is no removed state). Starting with a single infected site, the entire population is infected in time proportional to the log of the population size. The problem is that anti-entropy is expensive; this is why it is only used as a backup mechanism to guarantee consistency (rumor mongering can't achieve absolute consistency because it is random).

3-Deletion replication via Death Certificates: Death certificates are held for a fixed amount of time and spread like ordinary data. Most machines remove them at time T1, while a few machines keep them until time T2, which allows several years of deletions to persist at a few sparse locations in the system. A dormant death certificate can be activated (much like an antibody) to wipe out a bad update which would otherwise replicate. An activation timestamp helps the system know the certificate needs to be active again, but also prevents it from overriding a newer update.

Applicability: Their observations became the seeds of what would later become the gossip protocol which is still used in modern distributed systems (e.g. the Apache Cassandra “tombstones” are strikingly similar to death certificates). While some useful observations were made regarding spatial distributions, some questions were left unanswered here, so I felt like this wasn't one of their best contributions. The end of the paper felt rushed and they even mentioned that they gave up on one of their tests because it failed to run overnight.

Summary
The paper presents a randomized approach to update propagation intended to replace the deterministic direct mail approach. Direct mail suffers from performance bottlenecks, has limited scalability, requires centralized knowledge of the entire replication system, and is not consistent due to the unreliability of the underlying network. The paper describes rumor mongering, an approach to update propagation that looks to address each of these concerns, and performs an analysis of various choices made within this approach and their effect on the key parameters that judge epidemic ‘goodness’.

Problem
Direct mail suffers from performance bottlenecks where a single node that receives the update needs to propagate to all nodes that store copies. If the update is lost in transit, or if the recipient is down or overwhelmed, there will be an inconsistency that needs to be manually rectified. Enter anti-entropy, a replication scheme that would be reliable, but slow and hence cannot be used as the first choice, but only as a redundancy mechanism.

Contributions


  • Optimizations to the anti-entropy replication scheme - through checksums and update lists to make difference resolution more efficient.

  • Outlining factors that judge the ‘goodness’ of epidemic-based replication schemes, along with analysis of the effects of blind vs. feedback-based variants and of the choice between randomization (coin) and measurement (counter) in determining when to stop spreading.
  • Another example of the authors outlining tunable parameters is the choice of push vs. pull under workloads that are update-heavy versus ones with fewer updates - pull working better in a system with more updates and push in one with fewer updates.

  • Clever use of death certificates to overcome the new problem, introduced by rumor mongering, of deleted updates ‘respreading'. Also, the choice of keeping dormant certificates at only a subset of sites, thereby limiting the storage requirements, is a pretty smart way of making the solution practical.

  • Practical considerations in the choice of limiting randomization within a spatial locality to minimize network overheads without significant increase in convergence times.

Flaws


  • Synchronized clocks are hard to achieve, and the authors seemed to gloss over the difficulty/criticality of this for the system.

  • Maybe I don’t understand this well, but I don’t think the system deals with concurrent updates happening on different replicas; this will be particularly hard with clock skew between the replicas.

Applicability
Systems that are OK with the eventual consistency model, which have a large number of replicas to propagate updates to, with limited bandwidth on offer, with a need for graceful scalability can make use of this method.

This paper cares about how to maintain mutual consistency among the different replicas of a database. It was designed for the Clearinghouse system, a distributed, replicated database system. The system already had a direct mail feature, in which the server node receiving an update mails that change to every other replica in the system. But this method depends on the underlying communication being reliable, and hence can fail due to network clogging, buffer overflow, overload, etc., so the update can be missed by other nodes in the system. They come up with anti-entropy, which runs in the background to randomly select a replica node and compare databases to find whether one has updates the other lacks; if so, the change is propagated to the other one. Though this eventually achieves consistency, it takes a long time. It has push, pull, and push-pull modes. Since this is used in addition to direct mail, most of the server nodes would already have received updates, and the remaining server nodes can achieve consistency by pushing or pulling the updates from other nodes. To make the system more robust and faster, they come up with a complex epidemic concept called rumor mongering, where an infected node spreads the rumor to a set of nodes and stops spreading once it encounters a certain number of already-infected nodes. There can be different modes in this method, such as the following (a small sketch of the counter and coin variants appears after the list):
1. Feedback, where the recipient's state is sent back to the sender to know when to stop.
2. Blind, where the recipient's state is not considered for stopping.
3. Counter, where the sender stops spreading after a counter of unnecessary contacts reaches a threshold k.
4. Coin, where the sender stops spreading with probability 1/k each time an already-infected recipient is met.
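A hedged toy simulation contrasting the counter variant (item 3) with the coin variant (item 4) for a single update spread by push; the parameter names are mine, not the paper's:

    import random

    def residue(n=1000, k=2, use_counter=True):
        """Single update, push rumor mongering with feedback.  With use_counter=True a sender
        stops after k useless contacts (counter); otherwise it stops with probability 1/k on
        each useless contact (coin).  Returns the fraction of sites that never hear the update."""
        knows = [False] * n
        knows[0] = True
        useless = {0: 0}                     # active site -> useless contacts so far
        while useless:
            for site, count in list(useless.items()):
                target = random.randrange(n)
                if target == site:
                    continue
                if not knows[target]:
                    knows[target] = True
                    useless[target] = 0      # newly infected sites start spreading too
                elif use_counter:
                    useless[site] = count + 1
                    if useless[site] >= k:
                        del useless[site]
                elif random.random() < 1.0 / k:
                    del useless[site]
        return knows.count(False) / n

    print("counter:", residue(use_counter=True), "coin:", residue(use_counter=False))

If I remember the paper's comparison correctly, counters generally leave fewer sites un-updated than coins for the same k, which is why the counter-with-feedback combination is preferred.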
To propagate deletions, they issue a death certificate so that the deletion itself is spread and validated like any other update. To keep an inconsistent (resurrected) state from propagating after a certificate would normally be discarded, death certificates remain resident for a period of time, which helps during anti-entropy reconciliation. But this occupies a lot of space, so certain sites are selected as dormant sites that retain the certificates, which are again distributed. To handle the case where checksums disagree and a dormant certificate must be reactivated, they add an activation timestamp marking the time from which the certificate is valid again, while all equality comparisons still use the original deletion timestamp. They also talk about how the spatial distribution reduces a lot of traffic and copes with network failures in their system.

Contributions:
1. One of the first systems that tried developing an eventually consistent system.
2. To make anti-entropy efficient, they introduced incremental checksums and recent update lists to identify which data has changed, instead of raw comparison of the databases.
3. Spatial distribution of server nodes, an idea still used in present-day systems.
4. The system relies on the nodes themselves to propagate updates/deletes instead of a centralized point of contact.

Summary:

The paper presents an analysis of techniques used for update propagation to achieve eventual consistency in a distributed, replicated database environment, and proposes a randomized algorithm (built from existing techniques) which uses a non-uniform spatial distribution to ensure faster and more complete propagation of updates. The choices made in their algorithm are substantiated by mathematical comparison between the different options (for example, push vs. pull).

Problem:

The most important problem in a large, replicated distributed database system is synchronization of database updates among the servers.

Following are the problems with the existing methods, when they are used individually:

  • Direct mail (the server receiving an update propagates it to all other servers): limited view (the server receiving an update need not know all other servers), loss of update messages.
  • Anti-entropy (periodically the entire DB is synced): slow propagation of updates and unnecessary processing overhead.
  • Rumor mongering (a site with the rumor spreads it randomly to other servers until many servers receive the update): does not guarantee update delivery to all servers.

Contributions:

  • Comparison of database update propagation to epidemiology was neat and helped in easier comprehension of the problem scenario.
  • Usage of database checksums in anti-entropy is an effective solution as it avoids the overhead of huge DB comparison effort.
  • The pull method (for update exchange among servers) is shown to have better performance than the push method, using probability derivations.
  • Different mechanisms for complex epidemics were discussed, of which the counter-with-feedback mechanism is found to be best.
  • A combination of rumor-mongering for regular updates along with a periodic anti-entropy synchronization is found to be the best algorithm.
  • Their usage of a doubly linked list for the peel-back method (where the latest timestamp-keyed database entries are used for comparisons before performing a full anti-entropy sync) was simple and efficient.
  • Deletion of entries were dealt by using Death Certificates which were used to avoid incorrect propagation of old data.
  • Dormant death certificates were used to deal with the problem of persistence of deleted data on a crashed server.
  • Non-uniform distribution is shown to be twice as effective as uniform distribution of updates.

Limitations:

  • Synchronized clocks (with, say, NTP) would have performed much better for gauging the differences between database entries.
  • The distance parameter alone cannot give a proper measure for the spatial distribution; other factors like network link bandwidth and congestion are also relevant.

Applicability:

Eventual consistency is used in many deployments of present-day distributed systems. Also, the gossip protocol concepts presented in the paper are highly applicable to P2P systems (like BitTorrent), overlay networks, etc. In the database domain, Amazon's Dynamo uses anti-entropy methods to perform key-value replica synchronization. Overall, the algorithm presented in the paper is a simple combination of existing techniques which can be easily implemented to achieve consistency in any distributed environment.

Summary:
The authors introduce randomized algorithms to achieve eventual consistency in a distributed, replicated database, which they derive from the patterns in which epidemics spread.

Problem:
In a dynamically changing environment where new servers get added or removed frequently, it is impossible for all servers to have an up-to-date list of all other servers. Thus, using deterministic algorithms to ensure consistency in the system can be unreliable. So, the paper introduces a stochastic set of algorithms to make a heterogeneous, distributed database eventually consistent, and it also defines some parameters which need to be tuned to achieve this. In effect, the authors choose to relax the strictness of the consistency requirement in exchange for cheaper convergence.

Contributions:
(1) The most important contribution of the paper is the authors introducing two methods to achieve consistency,
(a) Anti-Entropy approach where sites periodically contact other sites and share their database with them
(b) Rumor Mongering, where, when a site encounters an update, it begins to gossip it to random sites until the rumor becomes old.
(2) The idea of keeping an inverted index of timestamps and checksums, to avoid having to go through the entire database every time two sites compare each other, can make the system much more efficient.
(3) The paper points out the difference between push and pull by showing that when there are very few sites which still need to be updated, pull can perform better.
(4) The authors do a good job of theoretically explaining why and how their system works and fails. They also discuss the drawbacks and benefits of each approach, which could be very useful to distributed systems engineers now.
(5) The paper then describes a hybrid approach of the above methods that makes the system eventually consistent. They propose a very simple way of achieving this hybrid using a doubly linked list.
(6) Another major contribution of the paper was how they handled deletions in their system, by using death certificates. These are timestamped and stay in the system until all the sites know about the deletion.
(7) The authors also introduce the concept of “dormant death certificates” to prevent obsolete data items from getting back into the system. This is a very neat idea!

Limitations :
(1) The system uses the current wall-clock time for timestamps. I understand that GPS-synchronized clocks were definitely not available at the time this system was built, but they could still have considered using logical clocks or vector clocks.
(2) The authors talk about spatial distributions to reduce network traffic on critical paths but I don’t think this would be applicable to systems these days as the topologies keep changing continuously.

Applicability:
This paper talks about making the system “eventually consistent”, which is exactly what systems like Amazon’s Dynamo and Facebook’s key-value store aim for these days, and that shows how relevant the paper still is. The paper introduces algorithms which are very easy to implement, and that makes them attractive for use in any replicated, distributed database.
