Queue error: Connection is closed. #1231

Closed · Philluxx opened this issue May 5, 2022 · 17 comments · Fixed by #1251

Philluxx commented May 5, 2022

Hey there,

We are adding jobs to a queue within an Express.js application, so we initialize the queue at startup.

Now we quite regularly see Connection is closed. messages in the console log. If we add an error listener, the error is caught there (with the same message). Nevertheless, new jobs can still be added to the queue successfully.

The same behaviour can be seen in a plain Node.js application. The error does not appear for workers.

My question is: how should we deal with this error? Adding an error listener with queue.on("error", ...) and suppressing the log messages feels quite bad. Can anyone give me a hint on where to look for the cause? Is this a bug in BullMQ or an issue in IORedis?

Thank you so much and best regards!

import { Queue } from 'bullmq';

(async () => {
    // The queue is created once at application startup; no jobs need to be
    // added for the error messages to appear.
    const queue = new Queue("test-queue", {
        connection: {
            host: "host",
            port: 6380,
            password: "topsecret",
            tls: { servername: "host" },
        },
    });

    queue.on("error", (error: Error) => {
        // Suppressing it feels bad...
        // console.log(error.message);
    });
})();
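For readers landing here before the fix: one stopgap (a sketch only, not an endorsed solution) is to filter the listener so that only this specific message is dropped while other errors still get logged. The exact string comparison is an assumption about how to tell the messages apart.

queue.on("error", (error: Error) => {
    // Hypothetical stopgap: drop only the noisy "Connection is closed."
    // message and keep logging everything else.
    if (error.message !== "Connection is closed.") {
        console.error("Queue error:", error);
    }
});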
Philluxx changed the title from "Queue error: Ccnnection is closed." to "Queue error: Connection is closed." on May 5, 2022
manast (Contributor) commented May 6, 2022

Are you closing the queue somewhere? The error seems to indicate that you tried to perform some operation on a closed connection.

Philluxx (Author) commented May 6, 2022

Hey, no, the queue is not closed. I have now tested it with different Redis servers. With a local Redis in Docker there are no error log messages; with an instance in Azure the described behaviour appears.

What confuses me is that it still works - although the error message is thrown, I can still add new jobs to the queue.

If I don't want the log to be flooded with these messages, I have to suppress all error log messages, but then "real", relevant messages may no longer be displayed.

manast (Contributor) commented May 6, 2022

Is it possible that you can write a minimal piece of code that reproduces it? (we can try with DO / Redislabs instances)

pruhstal commented May 9, 2022

What Azure tier is it? How many connections are you opening? My hunch is that it's automatically closing connections during periods of no activity.
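If an idle timeout on the managed Redis side is indeed the cause, one thing sometimes worth trying (a sketch only, not a confirmed fix for this issue) is enabling TCP keep-alive on the ioredis connection so the socket does not sit completely silent. keepAlive is the ioredis option for the keep-alive initial delay in milliseconds; the host, port and credentials below are placeholders.

import { Queue } from 'bullmq';

const queue = new Queue("test-queue", {
    connection: {
        host: "host",            // placeholder
        port: 6380,
        password: "topsecret",
        tls: { servername: "host" },
        // ioredis TCP keep-alive initial delay (ms); whether this avoids the
        // managed-Redis idle disconnects described here is an assumption.
        keepAlive: 30000,
    },
});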

bicrypt commented May 10, 2022

Same issue for me: local Redis works, managed Redis on DigitalOcean doesn't.

The console has been spammed with these error messages since 1.80.4, as far as I can tell: reverting to 1.80.3 resulted in zero such messages, and bumping the version back up to 1.80.4 instantly caused the errors to reappear.

Full stack trace for any version after 1.80.3:

Error: Connection is closed.
    at EventEmitter.RedisConnection.handleClientClose (/some/masked/path/node_modules/.pnpm/bullmq@1.80.4/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
    at EventEmitter.emit (node:events:527:28)
    at EventEmitter.emit (node:domain:475:12)
    at processTicksAndRejections (node:internal/process/task_queues:78:11)

The commit/change that I assume causes this issue can be found here.

Thanks for taking the time and addressing the issue.

Philluxx (Author) replied to pruhstal:

> What Azure tier is it? How many connections are you opening? My hunch is that it's automatically closing connections during periods of no activity.

The behavior occurs with both the Standard C0 (256 connections) and Premium P1 (7500 connections) tiers. The Redis instance is exclusively used for BullMQ.

Philluxx (Author) replied to manast:

> Is it possible that you can write a minimal piece of code that reproduces it? (we can try with DO / Redislabs instances)

Hi manast, this already occurs with the minimal code from my first post. As soon as the Node.js application is started, the behavior appears even if only one queue instance is created and no other code is present.

Tomorrow I will give some older versions a try as bicrypt mentioned.

Philluxx (Author) commented:

I can confirm this first occurs with 1.80.4 but not with 1.80.3.

manast (Contributor) commented May 12, 2022

I have tried both of the latest versions, 1.82.0 and 1.80.4, with the code above (and also adding some jobs), and there is no connection error. I used a single DO instance with 1 GB RAM. We are going to need more details in order to reproduce this issue.

Philluxx (Author) commented May 12, 2022

In my case, the connection-lost messages start occurring after some time and then about every 10 minutes, regardless of whether I put jobs in the queue. I have only seen this with the Queue class, but not with Worker, QueueScheduler and QueueLogger.

However, there is no noticeable malfunction; jobs are placed in the queue normally. Could it be that this is the display of an error that is in fact not an error, and it is just an internal reconnect that produces this message?

manast (Contributor) commented May 12, 2022

@Philluxx internal reconnects do not generate errors AFAIK. But this error comes from ioredis so I am not 100% sure. I left my connection running for 1 hour and no error was ever generated...

bicrypt commented May 12, 2022

Try using multiple Queues, Workers and QueueSchedulers, @manast.

I am using two of each and as @Philluxx stated, the errors start after a while and then happen every couple of seconds. For me they usually start exactly 5 minutes after the queues/workers/schedulers have been initialized.

In case it helps, these are the Redis connection settings I use for all connections (see the sketch after the list for how they map onto BullMQ's connection option):

autoResendUnfulfilledCommands: true,
autoResubscribe: true,
connectTimeout: 5000,
enableAutoPipelining: false,
enableOfflineQueue: true,
enableReadyCheck: false,
lazyConnect: false,
maxRetriesPerRequest: null,
showFriendlyErrorStack: true,
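As a concrete illustration of how settings like these end up on the connection, here is a minimal sketch; it assumes BullMQ simply forwards the connection object to ioredis, and the host, port, queue names and worker handler are placeholders rather than values from this thread.

import { Queue, Worker, QueueScheduler } from 'bullmq';

// Shared ioredis options, as listed above; host and port are placeholders.
const connection = {
    host: "managed-redis-host",
    port: 6379,
    autoResendUnfulfilledCommands: true,
    autoResubscribe: true,
    connectTimeout: 5000,
    enableAutoPipelining: false,
    enableOfflineQueue: true,
    enableReadyCheck: false,
    lazyConnect: false,
    maxRetriesPerRequest: null,
    showFriendlyErrorStack: true,
};

// Two of each, mirroring the setup described in this comment.
const queues = [new Queue("queue-a", { connection }), new Queue("queue-b", { connection })];
const schedulers = [new QueueScheduler("queue-a", { connection }), new QueueScheduler("queue-b", { connection })];
const workers = [
    new Worker("queue-a", async job => job.data, { connection }),
    new Worker("queue-b", async job => job.data, { connection }),
];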

redtogin commented:
We are seeing this error too, using Heroku-Redis. Noticed it a couple of weeks ago, and we haven't changed anything in our implementation.

A log snippet:

2022-05-16T14:14:48.039175+00:00 app[worker.1]:     at Redis.emit (node:events:394:28)
2022-05-16T14:14:48.039175+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:14:48.039175+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)
2022-05-16T14:14:48.138458+00:00 app[worker.1]: Error: Connection is closed.
2022-05-16T14:14:48.138460+00:00 app[worker.1]:     at RedisConnection.handleClientClose (/app/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
2022-05-16T14:14:48.138460+00:00 app[worker.1]:     at Redis.emit (node:events:406:35)
2022-05-16T14:14:48.138460+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:14:48.138461+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)
2022-05-16T14:16:21.000000+00:00 app[heroku-redis]: source=REDIS addon=redis-reticulated-83337 sample#active-connections=12 sample#load-avg-1m=0.395 sample#load-avg-5m=0.245 sample#load-avg-15m=0.185 sample#read-iops=0 sample#write-iops=37.643 sample#memory-total=15618680kB sample#memory-free=10019304kB sample#memory-cached=3923728kB sample#memory-redis=16115072bytes sample#hit-rate=0.51211 sample#evicted-keys=0
2022-05-16T14:19:07.000000+00:00 app[heroku-redis]: source=REDIS addon=redis-reticulated-83337 sample#active-connections=12 sample#load-avg-1m=0.33 sample#load-avg-5m=0.225 sample#load-avg-15m=0.18 sample#read-iops=0 sample#write-iops=37.434 sample#memory-total=15618680kB sample#memory-free=10004524kB sample#memory-cached=3934120kB sample#memory-redis=16115088bytes sample#hit-rate=0.51211 sample#evicted-keys=0
2022-05-16T14:19:49.076886+00:00 app[worker.1]: Error: Connection is closed.
2022-05-16T14:19:49.076896+00:00 app[worker.1]:     at RedisConnection.handleClientClose (/app/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
2022-05-16T14:19:49.076897+00:00 app[worker.1]:     at Redis.emit (node:events:406:35)
2022-05-16T14:19:49.076897+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:19:49.076898+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)
2022-05-16T14:19:49.077240+00:00 app[worker.1]: Error: Connection is closed.
2022-05-16T14:19:49.077241+00:00 app[worker.1]:     at RedisConnection.handleClientClose (/app/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
2022-05-16T14:19:49.077241+00:00 app[worker.1]:     at Redis.emit (node:events:394:28)
2022-05-16T14:19:49.077241+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:19:49.077242+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)
2022-05-16T14:19:49.177073+00:00 app[worker.1]: Error: Connection is closed.
2022-05-16T14:19:49.177075+00:00 app[worker.1]:     at RedisConnection.handleClientClose (/app/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
2022-05-16T14:19:49.177075+00:00 app[worker.1]:     at Redis.emit (node:events:406:35)
2022-05-16T14:19:49.177075+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:19:49.177076+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)
2022-05-16T14:19:49.177527+00:00 app[worker.1]: Error: Connection is closed.
2022-05-16T14:19:49.177527+00:00 app[worker.1]:     at RedisConnection.handleClientClose (/app/node_modules/bullmq/dist/cjs/classes/redis-connection.js:54:32)
2022-05-16T14:19:49.177528+00:00 app[worker.1]:     at Redis.emit (node:events:406:35)
2022-05-16T14:19:49.177528+00:00 app[worker.1]:     at Redis.emit (node:domain:475:12)
2022-05-16T14:19:49.177528+00:00 app[worker.1]:     at processTicksAndRejections (node:internal/process/task_queues:78:11)

manast (Contributor) commented May 19, 2022

@redtogin Are you listening to the "close" event and logging its output? The reason I am asking is that, according to your logs, the error comes from "handleClientClose", which looks like this:

    this.handleClientClose = (): void => {
      this.emit('close', new Error(CONNECTION_CLOSED_ERROR_MSG));
    };

manast (Contributor) commented May 19, 2022

Actually, I think you are really getting the disconnections, but we are probably wrongly converting them into errors.
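For context, the eventual fix (#1251, released in 1.83.2 per the changelog below) emits an ioredis:close event instead of an error. A rough sketch of that direction only, not the actual patch:

this.handleClientClose = (): void => {
    // Sketch of the fix direction described in the 1.83.2 changelog:
    // surface the disconnection as a dedicated event rather than an Error,
    // so "error" listeners are no longer flooded by ordinary disconnects.
    this.emit('ioredis:close');
};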

redtogin commented May 20, 2022

@manast I see you likely found the cause and fixed it, great! thanks 🙏

Edit: the PR is from @roggervalf, so kudos there! 😉

github-actions bot pushed a commit that referenced this issue May 24, 2022
## [1.83.2](v1.83.1...v1.83.2) (2022-05-24)

### Bug Fixes

* **close:** emit ioredis:close event instead of error ([#1251](#1251)) fixes [#1231](#1231) ([74c1c38](74c1c38))
github-actions commented:

🎉 This issue has been resolved in version 1.83.2 🎉

The release is available on:

Your semantic-release bot 📦🚀
