Many distributed database applications need to replicate data to improve data availability and query response time. The two-phase commit protocol guarantees mutual consistency of replicated data but does not provide good performance. Lazy replication has been used as an alternative solution in several types of applications such as on-line financial transactions and telecommunication systems. In this case, mutual consistency is relaxed and the concept of freshness is used to measure the deviation between replica copies. In this paper, we propose two update propagation strategies that improve freshness. Both of them use immediate propagation: updates to a primary copy are propagated towards a slave node as soon as they are detected at the master node without waiting for the commitment of the update transaction. Our performance study shows that our strategies can improve data freshness by up to five times compared with the deferred approach.
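The freshness gain of immediate propagation comes from overlapping propagation and replay with the master's own execution, instead of waiting for commit. A minimal back-of-the-envelope model can illustrate this; all names and timing parameters below are illustrative assumptions, not the authors' implementation or measurements:

```python
# Hypothetical sketch of deferred vs. immediate update propagation in a
# lazy master scheme. op_time, net_delay, and apply_time are assumed
# illustrative constants, not figures from the paper.

def deferred_freshness(n_ops, op_time, net_delay, apply_time):
    """Deferred: the whole update log is shipped only after the
    update transaction commits at the master."""
    commit = n_ops * op_time              # master finishes all writes
    arrival = commit + net_delay          # log shipped in one message
    return arrival + n_ops * apply_time   # slave then replays the log

def immediate_freshness(n_ops, op_time, net_delay, apply_time):
    """Immediate: each operation is propagated as soon as it is
    detected, so shipping and replay overlap with master execution."""
    finish = 0.0
    for i in range(1, n_ops + 1):
        arrival = i * op_time + net_delay      # op i leaves the master early
        finish = max(arrival, finish) + apply_time
    return finish

if __name__ == "__main__":
    n, op, net, apply_t = 100, 1.0, 5.0, 0.5
    print(f"deferred:  slave fresh at t={deferred_freshness(n, op, net, apply_t):.1f}")
    print(f"immediate: slave fresh at t={immediate_freshness(n, op, net, apply_t):.1f}")
```

Under these assumed parameters the deferred slave is fresh at t=155.0 while the immediate slave is fresh at t=105.5: replay is pipelined behind the master rather than serialized after commit, which is the intuition behind the freshness improvements the paper measures.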
The VLDB Journal – Springer Journals
Published: Feb 1, 2000