Get your application ready for enterprise success with Laravel Queues — from the basics to Horizon

Hi, I’m Valerio, a software engineer and Laravel-certified developer based in Naples, Italy.

This guide is for PHP developers who own an application serving real users and need a deeper understanding of how to introduce or improve scalability in their system using Laravel queues.

The first time I read about Laravel was in late 2013 at the beginning of version 5.x of the framework. I wasn’t yet a developer involved in significant projects, and one of the aspects of modern frameworks, especially in Laravel, that sounded the most mysterious to me was “Queues”.

Reading the documentation, I guessed at the potential, but without real development experience it stayed just a theory in my mind.

Today I’m the creator of Inspector, a real-time performance dashboard that executes thousands of jobs every hour, so my knowledge of this architecture is much deeper than it was back then.

Looking for a Laravel queue manager? In this article I’m going to show you how I discovered queues and jobs, and which configurations helped me process a large amount of data in real time while keeping server resources cost-friendly.

A gentle introduction

When a PHP application receives an incoming HTTP request, our code is executed sequentially, step by step, until the request’s execution ends and a response is returned to the client (e.g., the user’s browser).

That synchronous behavior is really intuitive, predictable, and simple to understand. I launch an HTTP request to my endpoint, the application retrieves data from the database, converts it into an appropriate format, executes some additional tasks, and sends it back. It’s linear.

Queues and jobs introduce asynchronous behaviors that break this linear flow. That’s probably why they seemed a little strange to me at the beginning.

But sometimes an execution cycle triggered by an incoming HTTP request involves a time-consuming task, e.g., sending an email notification to all team members of a project.

It could mean sending six or ten emails, which could take four or five seconds to complete. So every time a user clicks that button, they have to wait five seconds before they can continue using the app.

The more your user base grows, the worse this problem gets.

What do you mean by “time-consuming tasks”?

This is a legitimate question. Sending emails is the most common example used in articles that talk about queues, but I want to tell you what I needed to do in my real experience.

As a product owner, it’s really important for me to keep users’ journey information in sync with our marketing and customer support tools. So, based on user actions, we push updated user information to various external services via their APIs (i.e., external HTTP calls) for marketing and customer care purposes.

One of the most used endpoints in my application could send 10 emails and execute 3 HTTP calls to external services before completing. No user would wait all that time; much more likely they would stop using my application.

Thanks to queues, I can encapsulate all these tasks in dedicated classes, pass the information they need into their constructors, and schedule their execution for later in the background so my controller can return a response immediately.

class ProjectController extends Controller
{
    public function store(Request $request)
    {
        $project = Project::create($request->all());

        // Defer NotifyMembers, TagUserAsActive, and NotifyToProveSource,
        // passing the information they need to do their job
        Notification::queue(new NotifyMembers($project->owners));
        $this->dispatch(new TagUserAsActive($project->owners));
        $this->dispatch(new NotifyToProveSource($project->owners));

        return $project;
    }
}
I don’t need to wait until all of these processes are completed before returning a response; rather, I’ll wait only for the time needed to publish them in the queue. This could mean the difference between 10 seconds and 10 milliseconds!

Laravel queues: how they work

This is a classic “publisher/consumer” architecture. We’ve just published our jobs to the queue from the controller; now we’ll see how the queue is consumed and, finally, how the jobs are executed.

To consume a queue we need to run one of the most popular artisan commands:

php artisan queue:work
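The worker accepts several options to control its behavior; as an illustration (the connection name and values here are just examples to adapt to your setup):

```shell
# Consume the "redis" connection, attempt each job up to 3 times,
# and kill any job that runs for more than 60 seconds
php artisan queue:work redis --tries=3 --timeout=60
```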

As reported in the Laravel documentation:

Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.

Great! Laravel provides a ready-to-use interface to put jobs in a queue and a ready-to-use command to pull jobs from the queue and execute their code in the background.

The role of Supervisor

This was another “strange thing” at the beginning, and I think that’s normal when discovering new things. I went through this phase of study myself, so I write these articles to organize my own skills and, at the same time, to help other developers expand their knowledge.

If a job fails by throwing an exception, the queue:work command will stop.

To keep the queue:work process running permanently (consuming your queues), you should use a process monitor such as Supervisor to ensure that the queue:work command does not stop running even if a job throws an exception.

Supervisor restarts the command after it goes down, starting again from the next job and setting aside the one that failed.
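As a sketch, a minimal Supervisor program definition could look like this (the program name, paths, and user are assumptions to adapt to your own server):

```ini
[program:laravel-worker]
; Keep one queue:work process alive and restart it if it dies
command=php /var/www/app/artisan queue:work --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=1
stdout_logfile=/var/www/app/storage/logs/worker.log
```

After saving the file, running `supervisorctl reread` and `supervisorctl update` loads the new program definition.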

Jobs will be executed in the background on your server, no longer depending on an HTTP request. This introduces some changes that you need to consider when implementing the job’s code.

Here are the most important ones, in my opinion:

You don’t have the request

The HTTP request is gone. Your code will be executed from the CLI.

If you need request parameters to accomplish your task, pass them into the job’s constructor to use later during execution:

class TagUserJob
{
    public $data;

    public function __construct(array $data)
    {
        $this->data = $data;
    }
}

// Put the job in the queue from your controller
$this->dispatch(new TagUserJob($request->all()));

You don’t know who the logged user is

The session is gone too. In the same way, you won’t know the identity of the user who was logged in, so if you need the user’s information to accomplish the task, you need to pass the user object to the job’s constructor:

class TagUserJob
{
    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }
}

// Put the job in the queue from your controller
$this->dispatch(new TagUserJob($request->user()));
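As a side note, the default job stub generated by `php artisan make:job` includes the `SerializesModels` trait, which stores only the model’s identifier in the queue payload and re-fetches a fresh instance when the job runs. A sketch of a full job class (namespaces may differ between Laravel versions):

```php
use App\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class TagUserJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function handle()
    {
        // $this->user is a fresh model instance, re-fetched
        // from the database when the worker picks up the job
    }
}
```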

Laravel background jobs monitoring

Running in the background, you can’t immediately see whether your jobs generate errors.

You will no longer have the immediate feedback you get from the result of an HTTP request.

If a job fails, it will do so silently, without anyone noticing. Consider integrating a monitoring tool that can track job execution in real time and notify you if something goes wrong. That’s exactly what Inspector was designed to do: it’s a complete monitoring system for Laravel-based applications.

Understand how to scale

Unfortunately, in many cases this isn’t enough. A single queue with a single consumer may soon become insufficient.

Queues are FIFO buffers (First In, First Out). If you schedule many jobs, even of different types, each one has to wait for the previously scheduled jobs to complete before being executed.
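The FIFO behavior can be pictured with a few lines of plain PHP using SplQueue (job names are invented for the example):

```php
// Jobs are consumed in the exact order they were published
$queue = new SplQueue();

$queue->enqueue('SendInvoiceEmail');     // scheduled first
$queue->enqueue('SyncCrmContact');       // must wait for the job above
$queue->enqueue('GenerateWeeklyReport'); // runs last

while (!$queue->isEmpty()) {
    echo $queue->dequeue(), PHP_EOL; // SendInvoiceEmail comes out first
}
```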

There are two ways to scale:

Multiple consumers for a queue

In this way, several jobs can be pulled from the queue at a time (with five workers, for example, five jobs are processed in parallel), speeding up the queue’s consumption.

Single-purpose queues

You could also create specific queues for each job “type” you are launching, with a dedicated consumer for each queue.

In this way, each queue will be consumed independently without having to wait for the execution of the other types of jobs.
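In Laravel terms, this means dispatching each job to a named queue and starting a dedicated worker per queue. A sketch (the queue names are examples):

```php
// Route the job to a dedicated queue at dispatch time
$this->dispatch((new TagUserAsActive($project->owners))->onQueue('tags'));

// Then consume each queue with its own worker process,
// typically managed by Supervisor:
//   php artisan queue:work --queue=tags
//   php artisan queue:work --queue=emails
```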

Laravel Jobs documentation

According to the Laravel documentation:

Queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your codebase after they have been started. So, during your deployment process, be sure to restart your queue workers

So remember to restart the queue workers after any code change or deployment.
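Laravel ships a command for exactly this purpose: `queue:restart` instructs all workers to gracefully exit after finishing their current job, and the process monitor (e.g., Supervisor) brings them back up running the fresh code.

```shell
php artisan queue:restart
```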

Towards Horizon

Laravel Horizon is a queue manager that gives you full control over how many queues you want to set up, and the ability to organize consumers, allowing you to combine these two strategies and implement the one that fits your scalability needs.

It all starts by running php artisan horizon instead of php artisan queue:work. This command scans your horizon.php configuration file and starts a number of queue workers based on that configuration:

'production' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['advertisement', 'logs', 'phones'],
        'processes' => 9,
        'tries' => 3,
        'balance' => 'simple', // could be simple, auto, or null
    ],
],

In the example above, Horizon will consume three queues, with three processes assigned to each one.

As mentioned in the Laravel documentation, Horizon’s code-driven approach allows my configuration to stay in source control, where my team can collaborate on it. It also fits perfectly into a CI pipeline.

To learn the meaning of the configuration options in detail, consider reading this article: https://medium.com/@zechdc/laravel-horizon-number-of-workers-and-job-execution-order-21b9dbec72d7

My own configuration

'production' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default', 'ingest', 'notifications'],
        'balance' => 'auto',
        'processes' => 15,
        'tries' => 3,
    ],
],

Inspector uses mainly three queues:

  • ingest is for processes that analyze data coming from external applications;
  • notifications is used to schedule notifications immediately if an error is detected during data ingestion;
  • default is used for other tasks that I don’t want interfering with ingest and notifications processes.

Using balance=auto, Horizon knows that the maximum number of processes to activate is 15, and it distributes them dynamically according to each queue’s load.

If the queues are empty, Horizon keeps one process active for each queue, keeping a consumer ready to process the queue immediately if a job is scheduled.


Concurrent background execution can cause many other unpredictable bugs, such as MySQL “Lock wait timeout exceeded” errors and other design issues. I hope this article has helped you gain more confidence in using queues and jobs.

Share this article on your social accounts if you think it could be helpful for others.
