Queue worker process count: how to decide
Running Laravel queues with Redis. Currently 4 worker processes via Supervisor. Jobs are mostly HTTP API calls to external services, some DB writes. Queue depth spikes to 500+ at peak.
How do you decide how many workers to run?
For IO-bound jobs (HTTP calls, DB writes), you can run far more workers than CPU cores because each worker is idle most of the time waiting for network. We run 20 workers on a 4-core machine for API call jobs without issues.
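A Supervisor stanza for that kind of IO-bound pool might look roughly like this (paths, user, and queue name are assumptions, not from the post):

```ini
; Illustrative Supervisor program for IO-bound API-call jobs.
; Paths, user, and queue name are assumptions -- adjust for your deployment.
[program:laravel-api-worker]
command=php /var/www/app/artisan queue:work redis --queue=api --sleep=3 --tries=3
numprocs=20
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
user=www-data
stopwaitsecs=60
```

Note stopwaitsecs: set it longer than your slowest job so a deploy-time restart does not kill jobs mid-flight.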
Watch queue depth over time, not just instantaneous throughput. If depth grows faster than workers drain it during peak, add workers. Horizon gives you a good view of this with per-queue throughput and wait-time metrics.
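"Depth grows faster than workers consume it" can be made quantitative with Little's Law: concurrency needed ≈ arrival rate × average job duration. A minimal sizing sketch (the numbers are illustrative, not measured from the post's workload):

```python
# Little's Law sizing sketch: workers ~= arrival_rate * avg_job_seconds.
# Numbers are illustrative; measure your own rates (e.g. from Horizon metrics).
import math

def workers_needed(jobs_per_second: float, avg_job_seconds: float,
                   safety_factor: float = 1.5) -> int:
    """Minimum worker count to keep queue depth from growing, plus headroom."""
    return math.ceil(jobs_per_second * avg_job_seconds * safety_factor)

# 10 jobs/s arriving, each spending ~1.2s waiting on an external HTTP call:
print(workers_needed(10, 1.2))  # -> 18
```

If the computed number is below your current worker count at peak and depth still climbs, the bottleneck is elsewhere (rate limits, DB locks), not worker count.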
Be careful about external rate limits. If you triple the workers but your external API is rate limited to 100 req/min, you will just get 429s and failed jobs. Model the external constraint first.
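One way to model the constraint in-process is Laravel's Redis-backed throttle, which releases the job back onto the queue instead of burning a retry on a 429. A sketch assuming the post's 100 req/min limit (the job internals and property names are hypothetical):

```php
// Sketch: wrap the API call in a Redis-backed throttle so all workers
// collectively respect the external 100 req/min limit.
use Illuminate\Support\Facades\Redis;

public function handle(): void
{
    Redis::throttle('external-api')->allow(100)->every(60)->then(function () {
        // Acquired a slot: safe to call the rate-limited external API.
        $this->client->send($this->payload);
    }, function () {
        // No slot available: put the job back on the queue for ~10 seconds.
        $this->release(10);
    });
}
```

Because the throttle lives in Redis, it holds across all 20 workers, not per process.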
Memory per worker matters. Each Laravel queue worker boots the full application, so 20 workers at roughly 50 MB each is ~1 GB just for workers. Check actual RSS with htop rather than trusting estimates.
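That arithmetic generalizes to a quick budget check before raising numprocs (the 50 MB figure is from above; the RAM totals are assumed for illustration):

```python
# Quick memory budget check before adding workers.
# per_worker_mb should come from measured RSS (htop/ps), not guesswork.
def max_workers(total_ram_mb: int, reserved_mb: int, per_worker_mb: int) -> int:
    """Workers that fit after reserving RAM for the web app, Redis, and OS."""
    return (total_ram_mb - reserved_mb) // per_worker_mb

# 4 GB box, reserve 2 GB for everything else, ~50 MB RSS per worker:
print(max_workers(4096, 2048, 50))  # -> 40
```

Workers also grow over time if jobs leak memory, so budget against steady-state RSS after a few thousand jobs, not the fresh-boot number.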
Horizon supervisor configuration lets you set min and max processes and autoscale based on queue depth. Simpler than manually tuning a fixed number.
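For reference, an autoscaling supervisor stanza in config/horizon.php looks roughly like this (queue names and bounds are illustrative):

```php
// config/horizon.php (excerpt) -- illustrative values.
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'      => 'redis',
            'queue'           => ['api', 'default'],
            'balance'         => 'auto', // shift workers between queues by load
            'minProcesses'    => 4,
            'maxProcesses'    => 20,
            'balanceMaxShift' => 2,      // max workers added/removed per cycle
            'balanceCooldown' => 3,      // seconds between scaling decisions
            'tries'           => 3,
        ],
    ],
],
```

With 'balance' => 'auto', Horizon sizes each queue's pool from its current workload instead of a fixed split.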
We run separate Supervisor programs per queue. High-priority jobs get 10 workers, background sync gets 2. This prevents slow background jobs from starving urgent ones.
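That split might look like this in Supervisor (program names, paths, and queue names are assumptions):

```ini
; Two Supervisor programs, one per queue, so slow background jobs
; cannot starve urgent ones. Names and paths are illustrative.
[program:worker-high]
command=php /var/www/app/artisan queue:work redis --queue=high --sleep=1 --tries=3
numprocs=10
process_name=%(program_name)s_%(process_num)02d
autorestart=true

[program:worker-sync]
command=php /var/www/app/artisan queue:work redis --queue=sync --sleep=5 --tries=3
numprocs=2
process_name=%(program_name)s_%(process_num)02d
autorestart=true
```

Separate programs also let you restart or scale one pool without touching the other.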