A relational database must ensure a number of transaction properties, including atomicity, consistency, isolation, and durability (known as the ACID properties). To ensure this, Adaptive Server adheres to the following rules; it:
Writes all operations to the transaction log.
Writes log records before data or index pages are modified.
Writes log pages to disk when the transaction’s commit is issued.
Notifies the client application of the successful commit only after Adaptive Server has received notification of a successful write to disk from the underlying operating system and I/O subsystem.
set delayed_commit is a performance option suitable only for certain applications. It increases Adaptive Server’s performance for DML operations (for example, insert, update, and delete), but increases the risk of losing your data during a system failure. Performance gains depend on the application in use.
The types of applications that benefit from set delayed_commit typically include short transactions that are sent rapidly and serially to Adaptive Server. For example, a batch application that issues many insert statements, with each insert being a separate transaction.
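Such a workload might look like the following sketch, where each insert is sent in autocommit (unchained) mode and so commits as its own transaction; the table and column names are illustrative:

```sql
-- Hypothetical batch load: each statement is a separate transaction,
-- so each commit normally forces a synchronous log write.
insert into orders_staging (order_id, amount) values (1001, 49.95)
go
insert into orders_staging (order_id, amount) values (1002, 12.50)
go
insert into orders_staging (order_id, amount) values (1003, 88.00)
go
-- With delayed_commit enabled, the task does not wait for each log I/O,
-- which can improve throughput for serial, short transactions like these.
```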
You can enable delayed_commit for the session with the set command or the database with sp_dboption. The syntax for delayed_commit is:
set delayed_commit on | off | default
where on enables the delayed_commit option, off disables it, and default means the database-level setting takes effect.
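For example, to enable the option for the current session and later revert to whatever the database-level setting specifies:

```sql
-- Enable delayed_commit for this session only
set delayed_commit on
go
-- ... run short, serial transactions ...
-- Return control to the database-level setting
set delayed_commit default
go
```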
The syntax for sp_dboption is:
sp_dboption database_name, 'delayed commit', [true | false]
where true enables delayed_commit at the database level, and false disables the delayed_commit option. Only the DBO can set this parameter.
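For example, to enable and later disable the option for a hypothetical database named sales (the database name is illustrative):

```sql
use master
go
-- Enable delayed_commit at the database level (DBO only)
sp_dboption sales, 'delayed commit', true
go
-- Later, disable it again
sp_dboption sales, 'delayed commit', false
go
```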
After you enable set delayed_commit, the client application is notified of a successful commit before the corresponding log records are written to disk. This improves performance because all but the last log page are written to disk, alleviating contention on the last and active log page.
Consider the following before you enable set delayed_commit:
Issuing shutdown with nowait can cause data durability issues unless you issue a checkpoint and it completes before the server shuts down.
Enabling set delayed_commit for a session affects only that session. All other sessions’ transactions have all their ACID properties enforced. This also means that other sessions’ physical log writes may write the last log page, including the log records belonging to a session with set delayed_commit enabled.
set delayed_commit is redundant on temporary databases and does not provide a performance improvement.
Use set delayed_commit only after careful consideration of both your application and operational requirements and your environment. While the risk to data durability may be very low, the options for recovery may be time-consuming if your database is large and your tolerance for missing data is low.
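To reduce the shutdown exposure noted above, you can flush outstanding pages to disk before shutting down with nowait; a sketch, assuming an illustrative database named sales:

```sql
use sales
go
-- Force dirty data and log pages (including any pending last log page) to disk
checkpoint
go
use master
go
shutdown with nowait
go
```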
Logging behavior changes as follows when delayed_commit is enabled.
When a session implicitly or explicitly commits a transaction:
The user log cache (ULC) is flushed to the transaction log in memory.
The task issues writes on all non-written log pages except the last (which contains the commit).
The task notifies the client application of a successful commit without waiting for the IO to complete.
This transaction’s “last log page” is written:
By another transaction when it is no longer the “last log page.”
By another, non-delayed transaction when it completes.
By a checkpoint or the housekeeper buffer wash mechanism.
By an implicit checkpoint (for example, shutdown, dump database, dump tran, or sp_dboption 'trunc log on chkpt').
The task is ready to continue with the next transaction.
When set delayed_commit is enabled, Adaptive Server notifies the client application before the physical disk write completes. Because of this, the application perceives the transaction as complete whether or not the physical disk write succeeds. In the event of a system failure (disk errors, system crash, and so on), transactions that were not written to disk (transactions whose commit records were on the last log page) are not present after recovery, even though the application was notified that they committed.
Systems that require tight system interdependencies, such as through a messaging system used through RTDS, further complicate the consequences of using set delayed_commit.
There are two situations where applications can manage the risk:
The application maintains its own trace or log and, after a system failure, ensures that the database state corresponds to that trace or log.
You can restore the database to the state it was in before the application was run. This assumes you took a complete database backup before running a batch-job type application. In case of failure, the database backup is loaded and the batch job is restarted.
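The second approach can be sketched as the following operational sequence; the database name and dump file path are illustrative:

```sql
-- 1. Take a full database backup before running the batch job
dump database sales to '/backups/sales_before_batch.dmp'
go
-- 2. Run the batch application ...
-- 3. If the system fails mid-batch, restore the pre-batch state:
load database sales from '/backups/sales_before_batch.dmp'
go
online database sales
go
-- 4. Restart the batch job from the beginning
```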
You can enable set delayed_commit for a database or for a session, with the session setting overriding the database setting. This means that a session that enables the option has delayed_commit enabled regardless of the database setting.