A measure of the latency for a replicate site can be used to limit risk for some transactions. A short lag time indicates that the primary and replicate data are nearly consistent; a long lag time indicates a potentially larger difference between the primary and replicate data.
An application can use a measure of lag time to:
Limit risk by restricting the transactions clients can execute as the lag time increases. For example, the banking application described in the previous section could include the lag time in its approval formula: it might allow withdrawals of up to $1000 based on the balance in the local replicated table when the latency is less than a minute, but log in to the primary database to approve withdrawals of more than $500 when the lag time exceeds a minute. A sketch of this logic appears after this list.
Provide clients with a “performance meter” for data replication. Clients can use an estimate of lag time as an advisory. For example, a decision-support user who notices that the lag time is high might wait for the local data to catch up with the primary data, and for the lag time to decrease, before running an analysis based on replicate data.
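The following sketch, in Java, illustrates how an application might fold a lag-time measurement into the banking approval formula described in the first item. The getLagSeconds, balanceFromReplicate, and balanceFromPrimary helpers are hypothetical placeholders for the application's own latency probe and database queries; Replication Server does not supply them.

```java
import java.math.BigDecimal;

// A minimal sketch of lag-aware withdrawal approval. All helper
// methods are hypothetical; the application supplies its own latency
// measurement and its own connections to the replicate and primary
// databases.
public class LagAwareApproval {

    public boolean approveWithdrawal(BigDecimal amount) {
        long lagSeconds = getLagSeconds();  // hypothetical latency probe

        // When the replicate trails the primary by less than a minute,
        // trust the local balance for withdrawals up to $1000;
        // otherwise trust it only up to $500.
        BigDecimal localLimit = (lagSeconds < 60)
                ? new BigDecimal("1000")
                : new BigDecimal("500");

        if (amount.compareTo(localLimit) <= 0) {
            // Approve against the balance in the local replicated table.
            return balanceFromReplicate().compareTo(amount) >= 0;
        }
        // Larger withdrawals are approved against the primary database.
        return balanceFromPrimary().compareTo(amount) >= 0;
    }

    // --- Hypothetical helpers the application would implement ---
    private long getLagSeconds()              { return 0; }               // e.g., compare replicated timestamps
    private BigDecimal balanceFromReplicate() { return BigDecimal.ZERO; } // query the replicate database
    private BigDecimal balanceFromPrimary()   { return BigDecimal.ZERO; } // log in to the primary database
}
```

The same lag estimate can drive the “performance meter” in the second item: a client simply compares the measured lag against its own threshold before deciding whether to run a report against replicate data.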
See Chapter 4, “Performance Tuning” in the Replication Server Administration Guide Volume 2 for information on measuring latency.