Isolation Levels — part VII: Repeatable Read

Franck Pachot
3 min read · Dec 5, 2023

The Repeatable Read isolation level allows changes to the rows you have scanned, but not to the rows actually returned after all filtering conditions were applied. In other words, another transaction is not allowed to update or delete the rows that you fetched. For instance, say you are browsing the list of books available in a library for a specific category. You may select a book from this list and want to be sure that no book disappears from it while you work with it. However, you don’t mind if new books are added to the category and appear when you refresh the list, because ignoring them does not change your logic.

To guarantee Repeatable Read on databases without MVCC (Multi-Version Concurrency Control), the rows that are read are locked until the end of the transaction to prevent any changes. In MVCC databases, however, there is no need for such locking: they use a lock-free Snapshot Isolation technique, which consistently reads from the same read point so that the same data is returned every time, resulting in repeatable reads. Repeatable reads in MVCC databases were described in the previous post about Snapshot Isolation. In Snapshot Isolation, the read time of scanned rows is the start of the transaction, and an additional conflict-detection check verifies consistency with the write time, which is the end of the transaction. In Repeatable Read, locking the fetched rows makes their read time effectively the same as the end of the transaction.
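To see the difference in practice, here is a quick two-session sketch. The books table, its columns, and the values are made up for illustration, and the syntax is PostgreSQL-style; the comments describe how a lock-based engine and an MVCC engine would each behave.

```sql
-- Session 1: read the books of a category under Repeatable Read
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT id, title FROM books WHERE category = 'databases';

-- Session 2: try to update one of the rows fetched by session 1
UPDATE books SET title = 'New title' WHERE id = 1;
-- Lock-based engine: this UPDATE blocks until session 1 ends, because the
--   fetched rows hold shared locks until the end of the transaction.
-- MVCC engine: the UPDATE proceeds, but session 1 keeps reading from its
--   snapshot, so its repeated SELECT returns the same rows.

-- Session 1: the same query returns the same result
SELECT id, title FROM books WHERE category = 'databases';
COMMIT;
```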

I distinguished scanned rows from fetched rows. It is important to note that, despite its name, Repeatable Read in non-MVCC databases may encounter phantom reads, according to the SQL definition. This happens when the state you read is altered by another transaction that inserts and commits a row matching your criteria. With the book example above, if a new book arrives in the category you are browsing, it is a phantom read because it may appear when you re-run the query. This affects you only if your decision is based not just on the rows you fetched but also on the fact that no other rows were present. For example, an update that reserves a book from the list shows no anomaly in Repeatable Read, but a transaction that relies on the number of books in the category may commit wrong information.
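Here is a sketch of the phantom, with the same hypothetical books table, on a lock-based Repeatable Read implementation that locks only the rows it has already read (SQL Server-style syntax):

```sql
-- Session 1: count the books in a category under lock-based Repeatable Read
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT count(*) FROM books WHERE category = 'databases';   -- say it returns 3

-- Session 2: a new book arrives in the same category
INSERT INTO books (id, title, category) VALUES (42, 'New arrival', 'databases');
COMMIT;   -- not blocked: only the rows already read are locked, not the predicate

-- Session 1: the same query now returns 4 -- the inserted row is a phantom
SELECT count(*) FROM books WHERE category = 'databases';
COMMIT;
```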

Preventing the phantom read anomaly is a challenge in non-MVCC databases because you must lock not only the existing rows but also all potential insertions that match your read predicates. Achieving this requires complex predicate locking, locking table or index ranges, or even locking the full table, as in the sketch below. Because of this complexity, the SQL standard did not define an intermediate level between Repeatable Read and Serializable; its definitions were based on locking implementations and are no longer relevant for modern databases. In MVCC databases, however, setting the transaction isolation level to Repeatable Read actually runs it in Snapshot Isolation, which protects from phantom reads, read skew, and lost updates.
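As an illustration of the locking approach, here is how a lock-based engine such as SQL Server prevents the phantom at the Serializable level, assuming the hypothetical books table has an index on category: key-range locks are taken on that index, so matching inserts block until the reader ends its transaction.

```sql
-- Session 1: read with range locks (SQL Server-style, hypothetical table and index)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT count(*) FROM books WHERE category = 'databases';
-- key-range locks now cover the index range for category = 'databases'

-- Session 2: the insert that would create a phantom blocks
INSERT INTO books (id, title, category) VALUES (43, 'Another arrival', 'databases');
-- ...waits until session 1 commits or rolls back

-- Session 1
COMMIT;   -- releases the range locks; session 2's insert can proceed
```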

The Repeatable Read isolation level in YugabyteDB is the same as Snapshot Isolation. In fact, `set transaction isolation level repeatable read;` runs the transaction in Snapshot Isolation. The same is true in PostgreSQL.
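A quick sketch in PostgreSQL or YugabyteDB, with the same hypothetical table, shows that the phantom does not appear under Repeatable Read: the snapshot is taken at the first query of the transaction and every subsequent read uses the same read point.

```sql
-- Session 1: PostgreSQL / YugabyteDB, Repeatable Read = Snapshot Isolation
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM books WHERE category = 'databases';   -- say 3; the snapshot is taken here

-- Session 2: insert and commit a matching row
INSERT INTO books (id, title, category) VALUES (44, 'Yet another', 'databases');
COMMIT;

-- Session 1: still 3 -- no phantom, the read point has not moved
SELECT count(*) FROM books WHERE category = 'databases';
COMMIT;
```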

Originally published at https://dev.to on December 5, 2023.


Franck Pachot

Developer Advocate for YugabyteDB (Open-Source, PostgreSQL-compatible Distributed SQL Database). Oracle Certified Master and AWS Data Hero.