Optimistic vs Pessimistic Locking

In this post, we will explore the differences between optimistic and pessimistic locking and their respective use cases.

Before we dive deep into the details of the two lock types, let’s first try to understand the need for locking and how it is actually implemented.

As we know, whenever a process spawns more tasks (threads) than the number of available CPU cores, the operating system mimics parallelism through concurrency management, achieved via context switching. In other words, the operating system creates an illusion of parallel execution by rapidly switching the underlying resources (CPU cores, in this case) between the threads, while making sure that the tasks do not conflict with each other.

The benefit is more responsive applications and, in some cases, faster completion.

But consider the scenario where these threads are trying to update a shared resource. Let’s say we have an int variable initialized to 5, and two threads are both trying to increment its value:

  1. Thread1 reads the value and it’s 5.
  2. Thread2 reads the value and it’s 5.
  3. Thread1 updates the value and it’s now 6.
  4. But since Thread2 has already read the old value (5), it also updates the value to 6.

Ideally, the resulting value should be 7 (incremented twice), but because the read and update steps overlap (known as a race condition), it’s only 6.
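To see this in action, here is a minimal sketch (class and variable names are illustrative) in which two threads repeatedly perform the read-then-write increment on a shared int:

```java
public class LostUpdateDemo {
    private static int value = 5;

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                int read = value; // step 1: read the current value
                value = read + 1; // step 2: write the incremented value
            }
        };

        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Expected 200,005 (5 + 2 x 100,000), but typically prints less:
        // interleaved read/write steps cause some increments to be lost.
        System.out.println(value);
    }
}
```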

The root cause of the above problem is that the actual update involves two steps:

  • Reading the original value.
  • Incrementing it and writing it back.

If these two operations are not performed together (as one single step), the end result will always have the potential to be erroneous.

If we can guarantee that the two steps are always performed together, commonly known as an atomic operation, we can maintain data integrity, as only a single thread will be performing this group of operations at any given point in time.

Rewriting the same example (this time making sure that the read and write operations are performed by only one thread at any given point in time):

  1. Thread1 reads the value and it’s 5.
  2. Thread1 updates the value and it’s now 6.
  3. Thread2 reads the value and it’s 6.
  4. Thread2 updates the value and it’s now 7.

The above approach forms the basis of concurrency management by way of locks. We can think of a lock as a permission associated with a resource (a method, a code block, etc. - the group of read and update statements in this example) that grants access to the underlying resource only to the thread holding the lock.

And as the lock can only be acquired by one thread at a time, we can ensure mutual exclusion between the threads competing for that resource.

Pessimistic Locking

Pessimistic locking (generally implemented using the synchronized keyword in Java) is a way of achieving mutual exclusion by always locking the entire scope. The first thread to acquire the lock retains it until the scope finishes executing; it then releases the lock, which can be acquired by any other waiting thread.
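As a concrete illustration, here is a minimal sketch of the earlier counter guarded by synchronized (the class name is illustrative):

```java
// Pessimistic locking via the intrinsic lock: every caller must acquire
// the lock on this object before entering increment(), whether or not
// any other thread is actually competing for it.
public class SynchronizedCounter {
    private int value = 5;

    public synchronized void increment() {
        value = value + 1; // read + write now execute as one locked scope
    }

    public synchronized int get() {
        return value;
    }
}
```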

The problem with this approach lies in the fact that we always assume all possible threads will be competing for the underlying resource at the same time, which is generally not the case.

Due to this assumption, the additional overhead of acquiring and releasing the lock comes into the picture even when a single thread is repeatedly executing the synchronized block.

Optimistic Locking

Optimistic locking is a way of managing concurrency, generally used by transactional systems. In this approach, we allow multiple threads to compete to complete the update, but a thread commits its transaction only if the value has not already been updated by another thread.

The core idea behind optimistic locking is just the opposite of pessimistic locking. Instead of assuming that all the threads will be competing for the underlying resource at the same time, we take an optimistic approach and let the thread(s) run without any exclusive locking. But when committing the update, if the thread observes that the original value has changed (it must have been modified by some other thread), we roll back or retry the update transaction.

This way, we only retry the failed transactions, which minimizes the overhead of explicit locking observed in pessimistic locking. This is mostly implemented using compare-and-swap (CAS) algorithms, where each thread keeps a copy of the original value, or a hashcode/checksum of it.

At commit time, the saved identifier (copy/hashcode/checksum) is compared against the current value. Any change indicates that the value has been updated by some other thread, and we need to retry the update, which requires reading the updated value to work on.
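Here is a minimal sketch of such a retry loop, written with java.util.concurrent.atomic.AtomicInteger and its compareAndSet method (the class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(5);

    public void increment() {
        while (true) {
            int expected = value.get(); // remember the original value
            int updated = expected + 1; // compute the new value
            // Commit only if no other thread changed the value in between;
            // otherwise loop around, re-read, and retry.
            if (value.compareAndSet(expected, updated)) {
                return;
            }
        }
    }
}
```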

Java 5 introduced several classes that implement optimistic locking for common use cases, such as AtomicInteger, AtomicLong, AtomicReference, etc. The **-XX:+UseBiasedLocking** flag (no longer supported post Java 15) was also along similar lines.
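In practice, we rarely need to write the CAS retry loop ourselves; methods like incrementAndGet handle it internally. A minimal usage example:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(5);
        counter.incrementAndGet(); // atomically 5 -> 6 (CAS loop handled internally)
        counter.incrementAndGet(); // atomically 6 -> 7
        System.out.println(counter.get()); // prints 7
    }
}
```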

Additionally, in the past, Biased Locking provided a way for the JVM to optimize: if the JVM detected that only a single thread was accessing a synchronized block, subsequent atomic operations on the object happened at no additional synchronization cost.

Comparison

  1. There is a clear benefit to using optimistic locking in terms of average completion time, as shown by a benchmark of an increment operation on a synchronized method vs. AtomicInteger. You can refer to this code for details and run it locally (a rough sketch of such a comparison follows this list).
  2. Being explicit, pessimistic locking clearly communicates the author’s intention to the reader and is also less error-prone.
  3. On the other hand, if implemented poorly, pessimistic locking can leave the application in a deadlock state.
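Since the benchmark code itself is only linked above, here is a rough, hypothetical sketch of such a comparison. It is single-threaded and uncontended on purpose, to highlight the per-call locking overhead mentioned earlier; for reliable numbers, use a proper harness such as JMH:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockingBenchmarkSketch {
    private static int syncValue = 0;
    private static final AtomicInteger atomicValue = new AtomicInteger();

    private static synchronized void syncIncrement() {
        syncValue++; // lock is acquired and released on every call
    }

    public static void main(String[] args) {
        final int iterations = 10_000_000;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            syncIncrement();
        }
        System.out.println("synchronized:  " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            atomicValue.incrementAndGet(); // lock-free CAS increment
        }
        System.out.println("AtomicInteger: " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```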
