The fourth reason listed for using transactions was repeatable reads. A repeatable read simply means that, for the life of the transaction, every time a request is made by any thread of control to read a data item, it will be unchanged from its previous value; that is, the value will not change until the transaction commits or aborts.
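For example, within a single transaction the guarantee looks like the following sketch: a second read of the same item must return the same value as the first, because the read lock acquired by the first read is held until the transaction resolves. (The read_twice name and the "fruit" key are illustrative; the dbp and txn handles are assumed to come from an already-opened transactional environment.)

#include <string.h>
#include <db.h>

/*
 * Sketch of a repeatable read: within one transaction, re-reading the
 * same key returns the same value, because the read lock acquired by
 * the first DB->get() call is held until the transaction commits or
 * aborts.
 */
int
read_twice(DB *dbp, DB_TXN *txn)
{
    DBT key, data;
    int ret;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = "fruit";
    key.size = sizeof("fruit");

    /* First read: acquires a read lock on the data item. */
    if ((ret = dbp->get(dbp, txn, &key, &data, 0)) != 0)
        return (ret);

    /*
     * Second read: returns the same value, as no other transaction
     * can modify the item until this transaction commits or aborts.
     */
    return (dbp->get(dbp, txn, &key, &data, 0));
}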
Most applications do not need to enclose reads in transactions, and when possible, transactionally protected reads should be avoided because they often cause performance problems. The problem is that a transactionally protected cursor reading each key/data pair in a database will acquire a read lock on most of the pages in the database, and so will gradually block all write operations on the database until the transaction commits or aborts. Note, however, that if there are update transactions present in the application, the read operations must still use locking, and must be prepared to repeat any operation (possibly closing and reopening a cursor) that fails with a return value of DB_LOCK_DEADLOCK. The exceptions to this rule are when the application is performing a read-modify-write operation (and so requires atomicity), and when the application requires the ability to repeatedly access a data item knowing that it will not have changed.
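For example, a transactionally protected read might be structured along the following lines, aborting and retrying whenever a call returns DB_LOCK_DEADLOCK. (The read_with_retry name is illustrative; the dbenv and dbp handles are assumed to come from an already-opened transactional environment, and a production application would typically bound the number of retries.)

#include <string.h>
#include <db.h>

/*
 * Sketch: a transactionally protected read that aborts and retries
 * whenever the operation is selected to resolve a deadlock.
 */
int
read_with_retry(DB_ENV *dbenv, DB *dbp, const char *keystr)
{
    DB_TXN *txn;
    DBT key, data;
    int ret;

retry:
    if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
        return (ret);

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = (void *)keystr;
    key.size = (u_int32_t)strlen(keystr) + 1;

    switch (ret = dbp->get(dbp, txn, &key, &data, 0)) {
    case 0:                     /* Success: commit the read. */
        return (txn->commit(txn, 0));
    case DB_LOCK_DEADLOCK:      /* Deadlock: abort and retry. */
        (void)txn->abort(txn);
        goto retry;
    default:                    /* Any other error: abort and return. */
        (void)txn->abort(txn);
        return (ret);
    }
}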
Berkeley DB optionally supports reading uncommitted data; that is, read operations may request data that has been modified but not yet committed by another transaction. This is done by first specifying the DB_DIRTY_READ flag when opening the underlying database, and then specifying it again when beginning a transaction, opening a cursor, or performing a read operation. The advantage of using DB_DIRTY_READ is that read operations will not block when another transaction holds a write lock on the requested data; the disadvantage is that read operations may return data that will disappear should the transaction holding the write lock abort.
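For example, an application might scan a database without blocking behind writers along the following lines. (The scan_dirty name and the inventory.db database are illustrative; dbenv is assumed to be an already-opened transactional environment handle, and the DB->open() call shown is the form that takes a transaction argument.)

#include <string.h>
#include <db.h>

/*
 * Sketch: reading uncommitted data.  The database is opened with
 * DB_DIRTY_READ, and the cursor used to read it requests DB_DIRTY_READ
 * as well.
 */
int
scan_dirty(DB_ENV *dbenv)
{
    DB *dbp;
    DBC *dbc;
    DBT key, data;
    int ret, t_ret;

    if ((ret = db_create(&dbp, dbenv, 0)) != 0)
        return (ret);

    /* First, the database itself must be opened with DB_DIRTY_READ. */
    if ((ret = dbp->open(dbp, NULL,
        "inventory.db", NULL, DB_BTREE, DB_DIRTY_READ, 0)) != 0)
        goto err;

    /* Then, each read request must specify DB_DIRTY_READ as well. */
    if ((ret = dbp->cursor(dbp, NULL, &dbc, DB_DIRTY_READ)) != 0)
        goto err;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));

    /*
     * This scan never blocks on other transactions' write locks, but
     * pairs it returns may disappear if those transactions abort.
     */
    while ((ret = dbc->c_get(dbc, &key, &data, DB_NEXT)) == 0)
        ;
    if (ret == DB_NOTFOUND)
        ret = 0;

    if ((t_ret = dbc->c_close(dbc)) != 0 && ret == 0)
        ret = t_ret;

err:
    if ((t_ret = dbp->close(dbp, 0)) != 0 && ret == 0)
        ret = t_ret;
    return (ret);
}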