
Synchronization and Contention: when threads fight for resources

Contention is one of the most subtle causes of performance problems. Understand how to identify and resolve resource disputes.

In concurrent systems, multiple threads or processes frequently need to access the same resources. When that access is not carefully coordinated, contention arises: a fight over the resource that can devastate system performance.

This article explores what contention is, how it manifests, and strategies to minimize it.

Contention is the price of poorly managed concurrency.

What is Contention

Contention occurs when multiple processes or threads compete for a limited resource that can only be used by one at a time.

Examples of disputed resources

  • Locks/mutexes: only one thread can hold at a time
  • Database connections: limited pool
  • CPU cores: more threads than cores
  • Disk I/O: one operation at a time on the same file
  • Network: limited bandwidth

The cost of contention

When there's contention:

  1. Threads are waiting instead of working
  2. CPU spends time on context switching
  3. Throughput drops even with available resources
  4. Latency increases unpredictably

Types of Contention

Lock Contention

The most common in code.

synchronized (this) {
    // Only one thread at a time
    processRequest();
}

If processRequest() takes 10ms and 100 requests/second arrive:

  • 100 × 10ms = 1000ms of work per second
  • Only one thread can hold the lock at a time, so the lock is fully saturated: any burst in arrivals becomes a queue of waiting threads
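A rough way to see this serialization, assuming processRequest() is simulated by a 10 ms sleep inside the lock: two "parallel" threads take roughly the sum of their lock hold times, not the maximum.

```java
import java.time.Duration;
import java.time.Instant;

// Minimal sketch: two threads each hold the same lock for ~10 ms,
// so total elapsed time is ~20 ms, not ~10 ms.
public class LockSerialization {
    private static final Object LOCK = new Object();

    // Stand-in for processRequest(): ~10 ms of work while holding the lock
    static void processRequest() {
        synchronized (LOCK) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    static long runTwoThreads() {
        Instant start = Instant.now();
        Thread t1 = new Thread(LockSerialization::processRequest);
        Thread t2 = new Thread(LockSerialization::processRequest);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return Duration.between(start, Instant.now()).toMillis();
    }
}
```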

CPU Contention

More active threads than available cores.

8 cores, 100 active threads
= Lots of context switching
= Significant overhead
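For CPU-bound work, a common mitigation is to cap the pool at the core count instead of spawning a thread per task. A minimal sketch (the helper name is an illustrative choice):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CpuBoundPool {
    // For CPU-bound work, threads beyond the core count
    // mostly add context-switching overhead
    public static ExecutorService newCpuBoundPool() {
        int cores = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(cores);
    }
}
```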

I/O Contention

Multiple processes trying to read/write simultaneously.

Thread 1: write(file, data1)
Thread 2: write(file, data2)  // Waits for thread 1
Thread 3: write(file, data3)  // Waits for thread 2

Connection Contention

Exhausted connection pool.

Pool: 10 connections
Simultaneous requests: 50
= 40 requests waiting
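The waiting behavior above can be modeled with a Semaphore, one permit per connection. This is a minimal sketch, not a real pool implementation: acquire/release stand in for borrowing and returning a connection.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: a Semaphore modelling a pool of N connections.
// Requests beyond the pool size must wait (or time out) for a permit.
public class ConnectionGate {
    private final Semaphore permits;

    public ConnectionGate(int poolSize) {
        this.permits = new Semaphore(poolSize);
    }

    // Non-blocking attempt to borrow a connection
    public boolean tryAcquire() {
        return permits.tryAcquire();
    }

    // Waits up to the timeout for a connection to free up
    public boolean acquire(long timeout, TimeUnit unit) throws InterruptedException {
        return permits.tryAcquire(timeout, unit);
    }

    // Returns a connection to the pool
    public void release() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```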

Identifying Contention

Symptoms

  1. Low CPU, high latency: threads waiting, not working
  2. Throughput doesn't scale: more threads doesn't mean more work
  3. Erratic latency: depends on who arrived first
  4. Lock contention metrics: profiling tools show waiting

Diagnostic tools

Java:

  • jstack to see threads in BLOCKED state
  • JFR (Java Flight Recorder) for lock contention
  • VisualVM

Linux:

  • perf for CPU profiling
  • strace for I/O
  • /proc/[pid]/status for context switches

Metrics to monitor:

  • Threads in BLOCKED/WAITING state
  • Context switches per second
  • Average wait time per lock
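The BLOCKED-thread count can be sampled in-process with ThreadMXBean, the same data jstack reports. A minimal sketch, with a deliberately blocked worker thread to show a non-zero count:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedThreadCheck {
    // Counts live threads currently blocked waiting for a monitor
    public static long countBlockedThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long blocked = 0;
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info != null && info.getThreadState() == Thread.State.BLOCKED) {
                blocked++;
            }
        }
        return blocked;
    }

    // Demo: hold a lock, start a worker that wants it, sample the count
    static long demoBlockedCount() {
        Object lock = new Object();
        synchronized (lock) {
            Thread worker = new Thread(() -> {
                synchronized (lock) { /* unreachable while we hold the lock */ }
            });
            worker.start();
            // Wait until the worker is actually parked on the monitor
            while (worker.getState() != Thread.State.BLOCKED) {
                Thread.onSpinWait();
            }
            return countBlockedThreads();
        }
    }
}
```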

Mitigation Strategies

1. Reduce lock scope

Before:

synchronized (this) {
    validateInput(data);
    processData(data);
    saveToDatabase(data);
    sendNotification(data);
}

After:

validateInput(data);  // No lock
Data processed = processData(data);  // No lock

synchronized (this) {
    saveToDatabase(processed);  // Only what's necessary
}

sendNotification(data);  // No lock

2. Use lock-free structures

// Instead of a lock around a plain int field
synchronized (lock) {
    counter++;
}

// Use
AtomicInteger counter = new AtomicInteger();
counter.incrementAndGet();

3. Partition resources

Instead of one global lock, use locks per partition.

// Instead of
synchronized (cache) {
    cache.put(key, value);
}

// Use locks per segment
int segment = Math.floorMod(key.hashCode(), NUM_SEGMENTS);  // floorMod: hashCode() can be negative
synchronized (locks[segment]) {
    segments[segment].put(key, value);
}

ConcurrentHashMap used exactly this technique (lock striping across segments) through Java 7; since Java 8 it synchronizes at even finer granularity, per hash bin.
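The partitioning pattern can be sketched end to end as a small class. The class name, segment count, and HashMap backing are illustrative choices; Math.floorMod keeps the segment index non-negative even for negative hash codes.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal lock-striping sketch: one lock per segment instead of one global lock.
// Threads touching different segments no longer contend with each other.
public class StripedCache<K, V> {
    private static final int NUM_SEGMENTS = 16;

    private final Object[] locks = new Object[NUM_SEGMENTS];
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public StripedCache() {
        segments = new Map[NUM_SEGMENTS];
        for (int i = 0; i < NUM_SEGMENTS; i++) {
            locks[i] = new Object();
            segments[i] = new HashMap<>();
        }
    }

    // floorMod keeps the index in [0, NUM_SEGMENTS) for negative hash codes
    private int segmentFor(K key) {
        return Math.floorMod(key.hashCode(), NUM_SEGMENTS);
    }

    public void put(K key, V value) {
        int s = segmentFor(key);
        synchronized (locks[s]) {
            segments[s].put(key, value);
        }
    }

    public V get(K key) {
        int s = segmentFor(key);
        synchronized (locks[s]) {
            return segments[s].get(key);
        }
    }
}
```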

4. Increase pools

If connections are the bottleneck, increase the pool — but carefully:

  • Each connection consumes memory
  • Backend needs to support more connections

5. Use asynchronous operations

// Instead of waiting
CompletableFuture<Result> future = processAsync(data);
// Continue doing other things
future.thenAccept(result -> handleResult(result));

6. Batch operations

// Instead of
for (Item item : items) {
    synchronized (lock) {
        save(item);
    }
}

// Do batch
synchronized (lock) {
    saveAll(items);
}

Database Contention

Row-level locking

-- Transaction 1
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Transaction 2 (waits for transaction 1)
UPDATE accounts SET balance = balance + 100 WHERE id = 1;

Mitigations:

  • Short transactions
  • Order accesses consistently
  • Use appropriate isolation levels

Table-level locking

Some operations lock entire tables:

  • ALTER TABLE
  • LOCK TABLE
  • Indexes being created

Deadlocks

When two transactions wait for each other:

T1: lock(A), waits for lock(B)
T2: lock(B), waits for lock(A)
= Deadlock

Prevention:

  • Always acquire locks in the same order
  • Transaction timeouts
  • Detect and resolve automatically
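The ordering rule can be sketched in Java. Account and transfer are hypothetical names; the point is that both transfer directions lock the lower-id account first, so T1 and T2 can never each hold one lock while waiting for the other.

```java
public class OrderedTransfer {
    static class Account {
        final int id;
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    // Always lock the lower-id account first, regardless of direction
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    // Demo: opposite-direction transfers in parallel; with unordered
    // locking this pattern can deadlock, with ordering it cannot
    static long demoConcurrent() {
        Account a = new Account(1, 100);
        Account b = new Account(2, 100);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return a.balance + b.balance;  // money is conserved
    }
}
```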

Amdahl's Law and Contention

Amdahl's Law shows that the sequential part limits parallelization gains:

Maximum speedup = 1 / (S + (1-S)/N)

S = sequential fraction (contention)
N = number of processors

If 10% of code is sequential (contention):

  • With 10 cores: maximum speedup = 5.3x
  • With 100 cores: maximum speedup = 9.2x
  • With 1000 cores: maximum speedup = 9.9x
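These speedup figures follow directly from the formula; a small sketch that computes them:

```java
public class Amdahl {
    // Amdahl's Law: speedup = 1 / (S + (1 - S) / N)
    static double speedup(double sequentialFraction, int processors) {
        return 1.0 / (sequentialFraction + (1.0 - sequentialFraction) / processors);
    }
}
```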

Conclusion: reducing contention has more impact than adding resources.

Best Practices

  1. Measure before optimizing: not all contention is a problem
  2. Minimize critical sections: do the minimum inside locks
  3. Prefer concurrent structures: ConcurrentHashMap, AtomicInteger
  4. Avoid nested locks: source of deadlocks
  5. Use timeouts: don't wait infinitely
  6. Monitor continuously: contention can emerge with scale

Conclusion

Contention is one of the most subtle and impactful bottlenecks in concurrent systems. It:

  • Limits scalability
  • Increases latency
  • Wastes resources

To combat it:

  1. Identify where it occurs (profiling)
  2. Reduce critical sections
  3. Partition resources when possible
  4. Use appropriate structures for concurrency

Parallelism without contention is speed. Parallelism with contention is waste.

