# Concurrency Control Flow
This diagram shows how the semaphore-based concurrency control works across multiple workers.
```mermaid
graph LR
subgraph "Database Queue"
Q[Pending Jobs<br/>Priority Queue]
end
subgraph "Worker-1"
S1[Semaphore<br/>6 slots]
J1[Job 1]
J2[Job 2]
J3[Job 3]
J4[Job 4]
J5[Job 5]
J6[Job 6]
end
subgraph "Worker-2"
S2[Semaphore<br/>6 slots]
J7[Job 7]
J8[Job 8]
J9[Job 9]
J10[Job 10]
J11[Job 11]
J12[Job 12]
end
subgraph "Worker-3"
S3[Semaphore<br/>6 slots]
J13[Job 13]
J14[Job 14]
J15[Job 15]
J16[Job 16]
J17[Job 17]
J18[Job 18]
end
Q -->|Claim 6 jobs| S1
Q -->|Claim 6 jobs| S2
Q -->|Claim 6 jobs| S3
S1 --> J1
S1 --> J2
S1 --> J3
S1 --> J4
S1 --> J5
S1 --> J6
S2 --> J7
S2 --> J8
S2 --> J9
S2 --> J10
S2 --> J11
S2 --> J12
S3 --> J13
S3 --> J14
S3 --> J15
S3 --> J16
S3 --> J17
S3 --> J18
style Q fill:#FF6B6B
style S1 fill:#50C878
style S2 fill:#50C878
style S3 fill:#50C878
```
## Concurrency Control Mechanisms
### 1. Database-Level (Advisory Locks)
- **PostgreSQL Advisory Locks**: Prevent multiple workers from claiming the same job
- Atomic job claiming using `pg_try_advisory_lock()`
- Ensures each job is claimed by at most one worker at a time; a session-level advisory lock is released automatically if the holding worker's connection drops
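The claim pattern can be sketched with an in-memory stand-in for the advisory lock table. This is a Python sketch, not the production code: the real workers issue `SELECT pg_try_advisory_lock(:job_id)` against PostgreSQL, but the non-blocking try/fail semantics are the same.

```python
import threading

class AdvisoryLocks:
    """In-memory stand-in for PostgreSQL advisory locks."""

    def __init__(self) -> None:
        self._held: set[int] = set()
        self._mutex = threading.Lock()

    def try_lock(self, key: int) -> bool:
        # Non-blocking, like pg_try_advisory_lock: False if the key is already held.
        with self._mutex:
            if key in self._held:
                return False
            self._held.add(key)
            return True

    def unlock(self, key: int) -> None:
        # Counterpart of pg_advisory_unlock.
        with self._mutex:
            self._held.discard(key)

def claim_job(locks: AdvisoryLocks, job_id: int) -> bool:
    # A worker owns a job only if it wins the advisory lock for that job id.
    return locks.try_lock(job_id)

locks = AdvisoryLocks()
first = claim_job(locks, 42)   # worker A claims job 42
second = claim_job(locks, 42)  # worker B races for the same job and loses
print(first, second)  # True False
```

Because the lock acquisition is atomic, two workers polling the same queue can never both claim job 42; the loser simply moves on to the next pending job.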
### 2. Worker-Level (Semaphore)
- **SemaphoreSlim**: Limits concurrent backtests per worker
- Default: `Environment.ProcessorCount - 2` (e.g., 6 on an 8-core machine)
- Prevents CPU saturation while leaving resources for Orleans messaging
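The per-worker throttle can be illustrated with a Python `asyncio.Semaphore` standing in for .NET's `SemaphoreSlim`. The `LIMIT` of 6 and the simulated jobs are illustrative; the real default is `ProcessorCount - 2` as noted above.

```python
import asyncio

LIMIT = 6      # per-worker slot count from the diagram; real default is ProcessorCount - 2
peak = 0       # highest concurrency observed
running = 0    # jobs currently inside the semaphore
completed = 0

async def run_backtest(sem: asyncio.Semaphore, job_id: int) -> None:
    global peak, running, completed
    async with sem:             # waits here while all LIMIT slots are busy
        running += 1
        peak = max(peak, running)
        await asyncio.sleep(0)  # stand-in for the actual backtest work
        running -= 1
        completed += 1

async def main() -> None:
    sem = asyncio.Semaphore(LIMIT)
    # 50 queued jobs, but never more than LIMIT in flight at once.
    await asyncio.gather(*(run_backtest(sem, i) for i in range(50)))

asyncio.run(main())
print("peak concurrency:", peak)
```

The semaphore guarantees the invariant the section describes: no matter how many jobs are claimed, at most `LIMIT` backtests execute simultaneously, leaving headroom for Orleans messaging.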
### 3. Cluster-Level (Queue Priority)
- **Priority Queue**: Jobs ordered by priority, then creation time
- VIP users get higher priority
- Fair distribution across workers
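The ordering rule ("priority, then creation time") maps directly onto a min-heap keyed by a `(priority, sequence)` tuple. A minimal sketch, assuming the convention that a lower number means higher priority (e.g., VIP = 0, standard = 10):

```python
import heapq
import itertools

_seq = itertools.count()  # monotonic tiebreaker standing in for creation time

def push(queue: list, priority: int, job_id: str) -> None:
    # Heap orders by priority first, then enqueue order (FIFO within a tier).
    heapq.heappush(queue, (priority, next(_seq), job_id))

def pop(queue: list) -> str:
    return heapq.heappop(queue)[2]

queue: list = []
push(queue, 10, "std-1")  # standard user, enqueued first
push(queue, 0, "vip-1")   # VIP user, enqueued later but served first
push(queue, 10, "std-2")

order = [pop(queue) for _ in range(3)]
print(order)  # ['vip-1', 'std-1', 'std-2']
```

The sequence tiebreaker keeps jobs within the same priority tier in creation order, so VIP jobs jump the queue without starving or reordering standard jobs relative to each other.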
## Capacity Calculation
- **Per Worker**: 6 concurrent backtests
- **3 Workers**: 18 concurrent backtests
- **Average Duration**: ~47 minutes per backtest
- **Throughput**: 18 × (60 ÷ 47) ≈ 23 backtests/hour at steady state
- **1000 Users × 10 backtests**: 10,000 jobs ÷ ~23/hour ≈ 435 hours (~18 days) to drain the full queue
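The steady-state figures follow from throughput = concurrency × 60 ÷ average duration. A quick check of the arithmetic:

```python
workers = 3
per_worker = 6                  # semaphore slots per worker
concurrent = workers * per_worker
avg_minutes = 47.0              # average backtest duration

# Steady-state throughput: each slot finishes 60/avg_minutes jobs per hour.
throughput_per_hour = concurrent * 60 / avg_minutes

queue_size = 1000 * 10          # 1000 users x 10 backtests each
hours_to_drain = queue_size / throughput_per_hour

print(round(throughput_per_hour), round(hours_to_drain))  # 23 435
```

The same formula makes capacity planning straightforward: halving the average duration or doubling the worker count each roughly doubles throughput.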