ivan_morozov · 5 Apr 2025 04:42

Running Laravel queues with Redis. Currently 4 worker processes via Supervisor. Jobs are mostly HTTP API calls to external services, some DB writes. Queue depth spikes to 500+ at peak.

How do you decide how many workers to run?

Replies (6)
alex_petrov · 5 Apr 2025 04:54

For IO-bound jobs (HTTP calls, DB writes), you can run far more workers than CPU cores because each worker is idle most of the time waiting for network. We run 20 workers on a 4-core machine for API call jobs without issues.
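A setup like that is one Supervisor program with `numprocs` set well above the core count. A minimal sketch, with illustrative paths and app names:

```ini
; /etc/supervisor/conf.d/laravel-worker.conf -- paths and names are examples
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
numprocs=20
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log
stopwaitsecs=3600
```

`--max-time` recycles each worker periodically, which also keeps memory leaks from accumulating across long-running IO-bound processes.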

petr_sys · 5 Apr 2025 05:43

Watch the queue depth trend, not just raw throughput. If jobs arrive faster than your workers drain them during peak, depth keeps growing and you need more workers. Horizon gives you a good view of this with per-queue throughput and wait times.
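The sizing check behind this is simple arrival-vs-drain arithmetic. A sketch with made-up numbers (150 jobs/min at peak, 2 s average job time are assumptions, not figures from this thread):

```python
# Rough capacity check: does queue depth grow or shrink during peak?

def depth_change_per_min(arrival_per_min: float, workers: int,
                         avg_job_seconds: float) -> float:
    """Jobs added minus jobs drained per minute; positive means backlog grows."""
    drain_per_min = workers * (60.0 / avg_job_seconds)
    return arrival_per_min - drain_per_min

# 4 workers at 2 s/job drain 120 jobs/min; 150 arriving -> backlog grows +30/min
print(depth_change_per_min(150, 4, 2.0))   # 30.0
# 6 workers drain 180/min -> backlog shrinks
print(depth_change_per_min(150, 6, 2.0))   # -30.0
```

If the number is positive during your peak window, that is exactly the "depth grows faster than workers consume it" signal.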

dmitry_kv · 5 Apr 2025 06:30

Be careful about external rate limits. If you triple the workers but your external API is rate limited to 100 req/min, you will just get 429s and failed jobs. Model the external constraint first.
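Worth quantifying: past a certain worker count the rate limit, not CPU, is the ceiling. A sketch (the 100 req/min figure is from the reply above; the 2 s call time is an assumption):

```python
# Ceiling on useful workers when an external API caps requests per minute.
import math

def max_useful_workers(rate_limit_per_min: float, avg_call_seconds: float) -> int:
    """A worker doing back-to-back calls issues 60/avg_call_seconds req/min."""
    per_worker_per_min = 60.0 / avg_call_seconds
    return math.floor(rate_limit_per_min / per_worker_per_min)

# 100 req/min limit, 2 s per call -> each worker manages 30 req/min
print(max_useful_workers(100, 2.0))  # 3 -- more workers just collect 429s
```

Anything above that number only converts capacity into retries and failed jobs.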

vova · 5 Apr 2025 07:13

Memory per worker matters. Each Laravel queue worker loads the full app. With 20 workers that is 20 x 50MB = 1GB just for workers. Check actual RSS with htop.
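A back-of-envelope budget makes the constraint explicit. The 50 MB per-worker figure is an assumption; measure the real RSS with htop or ps as suggested above:

```python
# Back-of-envelope memory budget for a pool of queue workers.

def worker_memory_mb(workers: int, rss_per_worker_mb: float) -> float:
    """Total resident memory the worker pool needs."""
    return workers * rss_per_worker_mb

def pool_fits(workers: int, rss_per_worker_mb: float,
              total_ram_mb: float, reserve_mb: float = 1024) -> bool:
    """True if the pool fits after reserving RAM for the rest of the box."""
    return worker_memory_mb(workers, rss_per_worker_mb) <= total_ram_mb - reserve_mb

print(worker_memory_mb(20, 50))   # 1000.0 MB, matching the ~1GB estimate above
print(pool_fits(20, 50, 4096))    # True on a 4GB box with 1GB reserved
```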

ivan_morozov · 5 Apr 2025 08:50

Horizon supervisor configuration lets you set min and max processes and autoscale based on queue depth. Simpler than manually tuning a fixed number.
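For anyone landing here later, a sketch of what that looks like in `config/horizon.php`. Supervisor name, queue names, and the specific numbers are illustrative; the keys are from the Horizon docs:

```php
// config/horizon.php -- autoscaling supervisor sketch
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',     // scale processes between min and max
            'minProcesses' => 4,
            'maxProcesses' => 20,
            'balanceMaxShift' => 2,  // workers added/removed per scaling step
            'balanceCooldown' => 3,  // seconds between scaling decisions
            'tries' => 3,
        ],
    ],
],
```

With `balance => 'auto'`, Horizon shifts processes toward the queues with the most pending work instead of you hand-tuning a fixed count.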

katedev · 5 Apr 2025 10:19

We run separate supervisor groups per queue. High-priority jobs get 10 workers, background sync gets 2. This prevents slow background jobs from starving urgent ones.
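In plain Supervisor that split is two `[program:...]` blocks pinned to different queues. A sketch; paths and queue names are examples:

```ini
; Two worker pools, one per queue, so background jobs can't starve urgent ones
[program:worker-high]
command=php /var/www/app/artisan queue:work redis --queue=high --sleep=1 --tries=3
numprocs=10
autostart=true
autorestart=true
user=www-data

[program:worker-sync]
command=php /var/www/app/artisan queue:work redis --queue=sync --sleep=5 --tries=3
numprocs=2
autostart=true
autorestart=true
user=www-data
```

The same idea works with a single pool via `--queue=high,sync` (priority order), but separate pools give a hard guarantee that high-priority capacity is never consumed by background work.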
