Optimistic vs Pessimistic Locking in SQL

Optimistic locking and pessimistic locking are two of the most common ways to prevent lost updates in SQL.

Concurrency bugs rarely look dramatic in code. They usually look like two perfectly reasonable requests that happened to overlap at the wrong time. If you want the higher-level backend framing before choosing a locking strategy, see How to Prevent Race Conditions in Backend Systems.

One request reads a row. Another request reads the same row a few milliseconds later. Both decide an update is safe. The later write silently overwrites the earlier one.

That is where locking decisions start to matter.

How Lost Updates Happen in SQL

Consider a simple inventory update:

SELECT quantity
FROM products
WHERE id = 42;

The application sees quantity = 1 and decides the item can still be reserved.

At nearly the same time, another request does the same read and reaches the same conclusion. Both requests then write back an updated value.

If the workflow was supposed to allow only one reservation, the system now has a correctness bug.
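The interleaving above can be reproduced in miniature. A minimal sketch using Python's standard sqlite3 module, with the two requests' steps deliberately interleaved by hand (table name and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO products VALUES (42, 1)")

# Request A and request B both read BEFORE either one writes.
qty_a = conn.execute("SELECT quantity FROM products WHERE id = 42").fetchone()[0]
qty_b = conn.execute("SELECT quantity FROM products WHERE id = 42").fetchone()[0]

# Both saw quantity = 1, so both decide the reservation is safe.
reservations = 0
if qty_a > 0:
    conn.execute("UPDATE products SET quantity = ? WHERE id = 42", (qty_a - 1,))
    reservations += 1
if qty_b > 0:
    conn.execute("UPDATE products SET quantity = ? WHERE id = 42", (qty_b - 1,))
    reservations += 1

final = conn.execute("SELECT quantity FROM products WHERE id = 42").fetchone()[0]
print(reservations, final)  # 2 0: two reservations, but only one item existed
```

The second write silently overwrites the first, so the system hands out two reservations for a single unit of stock.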

This is one of the most common forms of concurrency failure:

  • lost updates
  • double processing
  • oversold inventory
  • duplicate job claims
  • state transitions applied in the wrong order

Transactions help, but they do not automatically make these bugs disappear. That boundary is exactly why SQL Isolation Levels Explained matters: a transaction defines a unit of work, but the isolation level still determines which interleavings remain possible.

When application correctness depends on preventing conflicting updates, teams usually reach for one of two models:

  • optimistic locking
  • pessimistic locking

Both can work. They solve the same broad problem in different ways.


How Optimistic Locking Works

Optimistic locking assumes conflicts are possible but relatively uncommon.

Instead of blocking access up front, the application allows concurrent readers to proceed and checks at write time whether the underlying row changed since it was last read.

The usual implementation adds a version column to detect whether another transaction changed the row first:

ALTER TABLE orders
ADD COLUMN version INT NOT NULL DEFAULT 1;

A request reads the row along with its version:

SELECT id, status, version
FROM orders
WHERE id = 1001;

Suppose the row returns:

  • status = 'pending'
  • version = 7

Later, the application attempts the update only if the version is still 7:

UPDATE orders
SET status = 'confirmed',
    version = version + 1
WHERE id = 1001
  AND version = 7;

If this update affects one row, the write succeeded. If it affects zero rows, someone else modified the row first.

That failed update is not a database bug. It is the system correctly detecting a write conflict.
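The zero-rows outcome is easy to observe from application code. A minimal sketch with Python's sqlite3 module, where a concurrent writer bumps the version before our update lands (schema and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)"
)
conn.execute("INSERT INTO orders VALUES (1001, 'pending', 7)")

# A concurrent transaction wins the race and bumps the version first.
conn.execute(
    "UPDATE orders SET status = 'cancelled', version = version + 1 WHERE id = 1001"
)

# Our update applies only if the version is still the 7 we originally read.
cur = conn.execute(
    "UPDATE orders SET status = 'confirmed', version = version + 1 "
    "WHERE id = 1001 AND version = 7"
)
print(cur.rowcount)  # 0: the row changed under us, so nothing was overwritten
```

Checking the affected-row count is the entire detection mechanism: one row means the write won, zero rows means it must be handled as a conflict.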

Why optimistic locking works

It turns a hidden concurrency bug into an explicit conflict the application can handle.

Instead of silently overwriting someone else's change, the write fails in a controlled way.

Typical application responses include:

  • retry with fresh data
  • return a conflict response such as 409
  • ask the user to refresh and reapply changes
  • re-run business logic against the latest state
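The first response, retrying with fresh data, can be sketched as a small loop: re-read the row, re-apply the business rule, and attempt the versioned update again. A hedged example using sqlite3 (the `confirm_order` helper, table, and retry limit are invented for illustration):

```python
import sqlite3

def confirm_order(conn, order_id, max_retries=3):
    """Retry an optimistic update with fresh data until it wins or gives up."""
    for _ in range(max_retries):
        row = conn.execute(
            "SELECT status, version FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        if row is None:
            return False
        status, version = row
        if status != "pending":
            return False  # re-run business logic against the latest state
        cur = conn.execute(
            "UPDATE orders SET status = 'confirmed', version = version + 1 "
            "WHERE id = ? AND version = ?",
            (order_id, version),
        )
        if cur.rowcount == 1:
            return True  # our write won
        # rowcount == 0: another writer got there first; re-read and retry
    return False

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)"
)
conn.execute("INSERT INTO orders VALUES (1001, 'pending', 7)")
ok = confirm_order(conn, 1001)
print(ok)  # True: with no concurrent writer, the first attempt succeeds
```

Note that the business rule is re-evaluated on every attempt; blindly re-sending the old write would defeat the purpose of detecting the conflict.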

When optimistic locking is a good fit

Optimistic locking usually works well when:

  • write conflicts are relatively rare
  • reads are much more common than writes
  • keeping latency low matters
  • short retries are acceptable
  • blocking other transactions would be more expensive than occasional conflict handling

Common examples:

  • editing profile data
  • updating business records in admin tools
  • moderate-contention order workflows
  • APIs where the client can retry safely

This pattern also fits well with systems that already treat duplicate or repeated requests carefully. If retries are part of the design, make sure they do not repeat side effects incorrectly. That is the same broader constraint behind Idempotency Keys for Duplicate API Requests.


How Pessimistic Locking Works with SELECT FOR UPDATE

Pessimistic locking assumes conflicts are likely enough that it is safer to prevent them up front.

Instead of detecting collisions after the fact, the transaction locks the row while the critical section is running so competing transactions must wait, fail, or skip the row.

In SQL, this often looks like SELECT ... FOR UPDATE:

BEGIN;

SELECT id, quantity
FROM products
WHERE id = 42
FOR UPDATE;

-- validate business rules
-- write updated state

UPDATE products
SET quantity = quantity - 1
WHERE id = 42;

COMMIT;

FOR UPDATE tells the database that this transaction intends to modify the selected row. Other transactions trying to lock or update the same row may be blocked until the current transaction commits or rolls back.
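The "pay in waiting" behavior can be demonstrated even without a server database. SQLite has no row-level FOR UPDATE, but BEGIN IMMEDIATE acquires a database-wide write lock up front, which is a coarse-grained analogue of the same idea. In this sketch (file path, schema, and timeout invented for illustration), a second writer's attempt to take the lock surfaces as an error once the busy timeout expires:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shop.db")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)

a.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, quantity INTEGER)")
a.execute("INSERT INTO products VALUES (42, 1)")

# Writer A takes the write lock up front, before deciding anything.
a.execute("BEGIN IMMEDIATE")
a.execute("UPDATE products SET quantity = quantity - 1 WHERE id = 42")

# Writer B must wait; with a short busy timeout, the wait surfaces
# as an OperationalError instead of blocking indefinitely.
blocked = False
try:
    b.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError:
    blocked = True

a.execute("COMMIT")  # releasing the lock lets other writers proceed
print(blocked)  # True
```

In PostgreSQL or MySQL the lock would cover only the selected rows, but the shape of the cost is the same: the second transaction spends its time waiting rather than retrying.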

Why pessimistic locking works

It reduces the chance that two transactions both make decisions based on the same stale value.

That makes it useful when the cost of a conflict is high and the system cannot rely on clients or workers to resolve it cleanly afterward.

When pessimistic locking is a good fit

Pessimistic locking is often a better choice when:

  • write contention is expected
  • the operation protects a strict invariant
  • conflict retries would be expensive or unsafe
  • one actor must claim exclusive ownership of a row
  • the workflow is short and bounded enough that holding a lock is acceptable

Common examples:

  • decrementing scarce inventory
  • claiming background jobs
  • preventing duplicate financial settlement
  • serializing state transitions on one aggregate

This is especially common in job-processing systems. A worker may lock a job row before claiming it so another worker cannot take the same work at the same time. That is closely related to the concerns in Background Jobs in Production.


Optimistic vs Pessimistic Locking: The Real Trade-Off

The difference between optimistic and pessimistic locking is not just implementation style. It is a trade-off between two kinds of cost.

Optimistic locking pays in retries

With optimistic locking:

  • transactions stay lighter
  • readers do not block each other
  • throughput can stay high under low contention
  • conflicts show up later as failed writes

This is excellent when collisions are rare. It becomes painful when many requests repeatedly try to update the same row or small set of rows.

Under heavy contention, the system may spend more time re-reading and retrying than doing useful work.

Pessimistic locking pays in waiting

With pessimistic locking:

  • conflicts are prevented earlier
  • write decisions become more predictable
  • correctness can be easier to reason about
  • blocked transactions consume time and connection capacity

If the locked section is slow, contention spreads outward:

  • request latency increases
  • connection pools stay busy longer
  • deadlock risk grows
  • throughput falls even though correctness improved

This is one reason production performance work can become counterintuitive. The database may not be slow because one query plan is bad. It may be slow because transactions now spend more time waiting on locks than doing work. That kind of mismatch often appears next to the same symptoms discussed in How to Find and Fix Slow SQL Queries in Production.


Optimistic vs Pessimistic Locking: Which Should You Use?

Use optimistic locking when

  • the same row is not updated frequently
  • the application can safely retry
  • user-facing edits should stay responsive
  • conflicts are acceptable as explicit outcomes

Use pessimistic locking when

  • the row is a hot spot
  • conflicts are frequent
  • correctness matters more than concurrent throughput
  • only one transaction may proceed at a time

Be careful with either approach when

  • transactions are long-lived
  • external network calls happen inside the transaction
  • retries trigger non-idempotent side effects
  • application code assumes transactions are stronger than they really are

That last point matters more than many teams expect. Locking is not a substitute for workflow design. If a request writes to the database and then separately emits an event, you may still have correctness gaps even if row-level locking is perfect. That is exactly why patterns like the Transactional Outbox Pattern in Microservices exist.


Example: Job Claiming

Suppose workers poll a table of pending jobs. Each worker wants to claim one job without duplicating work.

An optimistic approach might:

  1. read a pending job
  2. attempt an update with a version check
  3. retry if another worker changed it first
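The three optimistic steps above can be sketched as a claim loop: on a version conflict, loop back and pick the next candidate rather than fighting over the same row. A hedged example using sqlite3 (the `claim_job` helper and schema are invented for illustration):

```python
import sqlite3

def claim_job(conn):
    """Claim one pending job optimistically; retry on version conflicts."""
    while True:
        row = conn.execute(
            "SELECT id, version FROM jobs WHERE status = 'pending' "
            "ORDER BY created_at LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # nothing left to claim
        job_id, version = row
        cur = conn.execute(
            "UPDATE jobs SET status = 'running', version = version + 1 "
            "WHERE id = ? AND version = ?",
            (job_id, version),
        )
        if cur.rowcount == 1:
            return job_id  # the versioned update won: this job is ours
        # another worker changed this job first; loop and pick again

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, "
    "version INTEGER, created_at INTEGER)"
)
conn.executemany(
    "INSERT INTO jobs VALUES (?, 'pending', 1, ?)", [(1, 10), (2, 20)]
)
first = claim_job(conn)
second = claim_job(conn)
print(first, second)  # 1 2: each call claims a different job
```

Under heavy contention every worker keeps re-reading the same oldest job and losing the race, which is exactly the retry cost the pessimistic variant below avoids.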

A pessimistic approach might:

BEGIN;

SELECT id
FROM jobs
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

UPDATE jobs
SET status = 'running',
    started_at = NOW()
WHERE id = $selected_job_id;

COMMIT;

This pattern is often more practical for queues because it lets multiple workers claim different rows safely while avoiding direct collisions on the same job.

The important detail is not "locking is always better." It is that the queue has real contention, and the workflow benefits more from explicit row claiming than from retrying the same collision repeatedly.


Example: User Profile Edits

Now consider an admin dashboard where a record is edited occasionally.

Two users may open the same profile, but concurrent edits are uncommon. In that case, optimistic locking is often simpler and cheaper:

  • no row is locked during think time
  • the database does not wait on long-lived user actions
  • conflicts are rare enough to handle explicitly

This is exactly the kind of workflow where pessimistic locking usually creates more pain than value. Holding locks across slow application logic or user interaction is rarely a good trade.


Common Mistakes with SQL Locking

Treating transactions as automatic conflict protection

Many teams assume:

"We already use transactions, so concurrent correctness is handled."

That is too broad.

Transactions define atomic boundaries. They do not automatically guarantee that two concurrent requests cannot both make a logically invalid decision.

Locking too much, too early

Pessimistic locking can be correct and still be operationally expensive.

If a transaction locks rows and then:

  • calls another service
  • performs slow application work
  • waits on user input
  • scans more data than necessary

the system will hold contention much longer than intended.

Retrying optimistic conflicts unsafely

Optimistic locking often leads to retries. Retries are fine only when the operation can be repeated safely.

If the write is tied to emails, external charges, broker publishes, or other side effects, the retry path must be designed intentionally. Otherwise, the conflict-handling logic can create a second class of bugs while fixing the first.

Ignoring deadlocks

Pessimistic locking reduces one kind of concurrency problem, but it can introduce another.

If two transactions lock resources in different orders, deadlocks become possible. That means locking strategy must include:

  • consistent row access order
  • short transactions
  • safe retry handling after deadlock aborts
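Consistent row access order usually means touching rows in a canonical sequence, such as ascending primary key, so two concurrent transactions can never hold locks the other is waiting for. A hedged sketch (the `transfer` helper and accounts schema are invented; sqlite3 is used only to make it runnable, since the deadlock risk itself applies to row-locking databases):

```python
import sqlite3

def transfer(conn, from_id, to_id, amount):
    """Apply both balance changes in ascending id order so any two
    concurrent transfers acquire their row locks in the same sequence."""
    deltas = {from_id: -amount, to_id: +amount}
    conn.execute("BEGIN")
    for account_id in sorted(deltas):  # canonical lock order: ascending id
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (deltas[account_id], account_id),
        )
    conn.execute("COMMIT")

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
transfer(conn, 2, 1, 30)  # rows are updated as id 1, then id 2
```

Without the sorted order, a transfer from 1 to 2 and a transfer from 2 to 1 could each lock their first row and then wait forever on the other's.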

How to Choose in Real Systems

A practical decision rule is:

Start by asking whether conflict is rare or expected.

If conflict is rare, optimistic locking is often the better default. It keeps concurrency higher and operational behavior simpler.

If conflict is common and the invariant is strict, pessimistic locking is often the safer tool.

Then ask a second question:

What happens after a conflict or wait?

That is where many designs fail. The database strategy may be correct, but the surrounding workflow still may not be:

  • can the request retry safely?
  • can the user recover from a conflict?
  • will blocked transactions exhaust connection capacity?
  • does the write trigger downstream side effects?

The right answer depends on both correctness and system behavior under load.

If you work with PostgreSQL or MySQL, the exact syntax and lock behavior differ in details, but the core decision stays the same: detect conflicts late with version checks, or prevent them early with row locks.


Quick Decision Guide

Choose optimistic locking if:

  • concurrent writes are uncommon
  • retries are cheap
  • the user or client can recover from a conflict
  • long lock waits would hurt throughput more than occasional write failures

Choose pessimistic locking if:

  • the same rows are updated frequently
  • one worker or request must win immediately
  • the protected invariant is strict
  • retry storms would be worse than temporary waiting

Closing Reflection

Optimistic and pessimistic locking are not competing buzzwords. They are two different ways to decide where concurrency cost should appear.

Optimistic locking pushes cost toward conflict detection and retry. Pessimistic locking pushes cost toward waiting and contention.

Neither is universally better. The right choice depends on how often conflicts happen, how strict the invariant is, and whether the surrounding system can safely absorb retries or blocking.

If you are debugging lost updates, duplicate job claims, or state transitions that behave differently under traffic, the locking model is not a small database detail. It is part of the application's correctness design.