System Programming Models

How we express concurrency affects both performance and productivity.

1. Thread per Task

Simple to program and understand, typical of older network servers:

  1. Listen for new connections.
  2. Accept a new connection.
  3. Create a thread to handle the connection.
  4. Do all blocking operations (read request, read/write disk, write network response).

However, at very high concurrency this model can exhaust resources (too many threads) and incur heavy context-switching overhead.
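The four steps above can be sketched as a minimal thread-per-connection echo server in Python. This is illustrative, not from the original notes: the echo workload, buffer size, and loopback port are assumptions.

```python
import socket
import threading

def handle(conn):
    # Step 4: all blocking operations happen on this connection's own thread.
    with conn:
        while True:
            data = conn.recv(1024)   # blocking read of the request
            if not data:
                break
            conn.sendall(data)       # blocking write of the response

def serve(listener):
    while True:
        try:
            conn, _addr = listener.accept()   # Step 2: accept a connection
        except OSError:
            break                             # listener closed: shut down
        # Step 3: create a thread to handle the connection.
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # Step 1: listen (ephemeral port)
listener.listen()
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Exercise the server with a simple client.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
listener.close()
print(reply)   # b'hello'
```

Each connection gets its own stack, which is what makes this simple to program: the handler is straight-line blocking code.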

2. Worker Pools

Instead, pre-allocate one or more thread pools, avoiding the cost of on-demand thread creation. However, no new clients are processed while all workers are blocked, and context-switching overhead remains if there are more threads than CPUs:

  1. Listen for new connections.
  2. Accept a new connection.
  3. Enqueue the connection to a worker pool.
  4. Dequeue a connection and do all blocking operations.
  5. Return to the pool and wait for the next connection.
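A sketch of these steps, again in Python and again with assumed details (pool size, echo workload, a `None` sentinel for shutdown): a fixed set of workers drains a shared queue of accepted connections.

```python
import socket
import threading
import queue

POOL_SIZE = 4
connections = queue.Queue()

def worker():
    while True:
        conn = connections.get()          # Step 4: dequeue a connection
        if conn is None:                  # assumed sentinel: stop this worker
            break
        with conn:
            data = conn.recv(1024)        # blocking operations, as before
            conn.sendall(data)
        connections.task_done()           # Step 5: back to the pool

# Pre-allocate the pool once, instead of one thread per connection.
workers = [threading.Thread(target=worker, daemon=True) for _ in range(POOL_SIZE)]
for w in workers:
    w.start()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))           # Step 1: listen
listener.listen()
port = listener.getsockname()[1]

def serve():
    while True:
        try:
            conn, _addr = listener.accept()   # Step 2: accept
        except OSError:
            break
        connections.put(conn)                 # Step 3: enqueue to the pool

threading.Thread(target=serve, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping")
    reply = c.recv(1024)
listener.close()
print(reply)
```

Note the failure mode from the text: if all POOL_SIZE workers block inside step 4, newly accepted connections sit in the queue unserved.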

3. Event Based

Event-based models are typical of I/O-heavy network servers. A single loop waits for readiness events and dispatches each one to a small per-connection state machine, so many operations are handled concurrently on one thread.

This has no thread scheduling overhead and minimal memory usage (a state machine is much smaller than a thread stack), but it can be more complex to program and understand.
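A minimal event loop can be sketched with Python's stdlib selectors module; the callback-per-socket dispatch and the two-round loop below are assumptions made for a self-contained demo, and a real server would also buffer partial writes.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(listener):
    conn, _addr = listener.accept()
    conn.setblocking(False)
    # Register the connection's "state machine": here just an echo callback.
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)    # socket is ready, so this does not block
    if data:
        conn.sendall(data)    # sketch: assumes the send buffer accepts it all
    else:
        sel.unregister(conn)
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
port = listener.getsockname()[1]
sel.register(listener, selectors.EVENT_READ, accept)

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"event")

# Single-threaded loop: one round accepts, the next round echoes.
for _ in range(2):
    for key, _mask in sel.select(timeout=1):
        key.data(key.fileobj)    # dispatch the event to its callback

reply = client.recv(1024)
client.close()
listener.close()
print(reply)
```

No threads are created for connections at all: concurrency comes entirely from the loop multiplexing readiness events, which is why per-connection memory is so small.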
