
Support for Consumer Priority #987

Open
pardahlman opened this issue Apr 29, 2022 · 6 comments

Comments

@pardahlman

Consumer Priority is a useful feature in RabbitMQ that allows a consumer to assign a priority value to its message consumption. We want to leverage this feature in our development environment to force message handling to local handlers (e.g. running on a developer's machine). The idea is that the developer only needs to start the endpoint in question locally, then interact with the development environment, and any messages that would be handled by that endpoint are routed to the local instance.

Unfortunately, it looks like the call to basic.consume in MessagePump does not pass any arguments, nor is it possible to access or modify the call.

I think it would be a great addition to NServiceBus.RabbitMQ if it were possible to specify consumer priority. Any thoughts?
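For reference, at the AMQP level consumer priority is an entry in the basic.consume arguments table under the key "x-priority" (per the RabbitMQ consumer-priority documentation). A minimal sketch in Python (illustrative only, not NServiceBus code; the commented pika call shows roughly where the table would be used, with placeholder queue and callback names):

```python
def consumer_arguments(priority: int) -> dict:
    """Arguments table that a basic.consume call would carry
    to request the given consumer priority."""
    return {"x-priority": priority}

# With a client such as pika, the table would be passed like this
# (queue name and callback are hypothetical placeholders):
# channel.basic_consume(queue="some.endpoint",
#                       on_message_callback=handle,
#                       arguments=consumer_arguments(10))

print(consumer_arguments(10))  # {'x-priority': 10}
```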

@mikeminutillo
Member

Hi @pardahlman. Thanks for the suggestion. We will consider it for a future release.

@pardahlman
Author

@mikeminutillo checking in to see whether there are any updates on this issue. Is it on the roadmap, and if not, would a PR addressing it be appreciated?

@DavidBoike
Member

Hi @pardahlman. It is not on the roadmap. I wouldn't suggest a PR as there's no guarantee we would ultimately merge it. It would be helpful to know a bit more about your motivations. It sounds like you are thinking of using the RabbitMQ feature only as a hack in your development environment, in lieu of each developer setting up a RabbitMQ broker locally. I do have to say I'm a little concerned about the prospect of a feature that would only be used in development (or at the very least, designing it that way) because either someone would abuse it in production, or it wouldn't be designed with a production-centered use case in mind in the first place.

@pardahlman
Author

Thanks for getting back to me, @DavidBoike.

It would be helpful to know a bit more about your motivations. It sounds like you are thinking of using the RabbitMQ feature only as a hack in your development environment, in lieu of each developer setting up a RabbitMQ broker locally.

Our distributed system consists of multiple applications (I have no exact count, but likely 100-200 distinct applications). It is not feasible to run all of these locally, so developers have access to a full development environment (not on the local machine). HTTP calls are routed through a local proxy that detects whether a local instance of the web server is running and, if so, routes traffic to it; otherwise it falls back to the instance in the development environment. This has been a successful strategy for developing features, debugging, etc. over HTTP. For debugging NServiceBus interactions, developers have to manually stop instances running in the development environment to ensure that messages are routed to the local instance. This adds friction to the development process: stopping the endpoint is a couple of extra steps, and developers might forget to start it again when they are done.

As it is not possible to run all relevant NServiceBus endpoints locally, it is not feasible to run the RabbitMQ broker locally either.

I do have to say I'm a little concerned about the prospect of a feature that would only be used in development (or at the very least, designing it that way) because either someone would abuse it in production, or it wouldn't be designed with a production-centered use case in mind in the first place.

Normally I would agree with this type of argument, but I don't see how using a RabbitMQ feature like this could be abused. In fact, there might be production scenarios where it is desirable to route traffic to consumers in a more controlled way: measuring maximum throughput, determining how many instances of an endpoint should be used, or gracefully migrating endpoints between hosting environments.

@DavidBoike
Member

After reviewing the docs, I don't think this would work like you think it would. Specifically this section:

When consumer priorities are in use, you can expect your highest priority consumers to receive all the messages until they become blocked, at which point lower priority consumers will start to receive some. It's important to understand that RabbitMQ will still prioritise delivering messages - it will not wait for a high priority blocked consumer to become unblocked if there is an active lower priority consumer ready.

So if you were developing against this locally, and your consumer became blocked from prefetching messages, messages would continue to be delivered to the other running endpoints in the remote development environment. It's not going to hold those messages until your consumer becomes unblocked. So at any time during development, you could observe what would look like message loss. I would think this could get really confusing and frustrating.
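The delivery rule quoted above can be sketched in a few lines (Python, illustrative only, not RabbitMQ internals): the broker picks the highest-priority consumer that is not blocked, and if every high-priority consumer is blocked it falls through to lower priorities immediately rather than waiting.

```python
def pick_consumer(consumers):
    """consumers: list of (name, priority, blocked) tuples.
    Returns the name of the consumer that would receive the next
    message, or None if every consumer is blocked."""
    active = [c for c in consumers if not c[2]]
    if not active:
        return None
    # Highest priority among the *unblocked* consumers wins.
    return max(active, key=lambda c: c[1])[0]

# The local high-priority consumer receives messages while it keeps up...
print(pick_consumer([("local", 10, False), ("remote", 0, False)]))  # local
# ...but once it is blocked, delivery falls back to the remote endpoint
# immediately instead of waiting for the local consumer to unblock.
print(pick_consumer([("local", 10, True), ("remote", 0, False)]))   # remote
```

This is the behavior behind the "observed message loss" concern: during any window in which the local consumer is blocked, messages flow to the remote endpoints.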

And then what happens if some other developer starts operating on the same endpoint, with the same (and/or higher) priority level, and then they start stealing your messages?

@pardahlman
Author

Thanks for getting back to me and taking the time to look into this - it's much appreciated!

So if you were developing against this locally, and your consumer became blocked from prefetching messages, messages would continue to be delivered to the other running endpoints in the remote development environment.

I realize that there are cases where messages will be routed to a lower-priority handler. My understanding is that a consumer becomes blocked when the queue is longer than the configured prefetch count on the channel the consumer uses. Is this correct? In our development environment we use a prefetch count of 24, most messages are handled within milliseconds, and the message rate for most of our endpoints is not high enough for this to be a problem.
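For reference, RabbitMQ considers a consumer blocked once its unacknowledged deliveries reach the channel's prefetch (basic.qos) limit, regardless of how long the queue itself is; acking a message frees a slot again. A minimal sketch of that condition, using the prefetch count of 24 mentioned above (Python, illustrative only):

```python
PREFETCH_COUNT = 24  # prefetch count from the setup described above

def is_blocked(unacked: int) -> bool:
    """A consumer stops receiving new deliveries once its count of
    unacknowledged messages reaches the prefetch limit."""
    return unacked >= PREFETCH_COUNT

print(is_blocked(5))   # False: still receiving deliveries
print(is_blocked(24))  # True: deliveries pause until a message is acked
```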

And then what happens if some other developer starts operating on the same endpoint, with the same (and/or higher) priority level, and then they start stealing your messages?

This is undesired, but solvable with communication. I don't think this problem necessarily needs to be solved by NServiceBus, given that the API exposed only communicates "set consumer priorities".

To summarize, I believe this feature may be useful in some production scenarios and would definitely increase our productivity. I realize that its usefulness depends on multiple factors, including development setup, the number of development environments, and so on. I don't see the danger in exposing an option to configure the priority, aside from a slightly larger API surface.

Projects: None yet
Development: No branches or pull requests
3 participants