
Disable the auto-consume logic in cdb2_close #3362

Merged (1 commit, Aug 5, 2022)

Conversation

riverszhang89 (Contributor)

cdb2_close() attempts to drain unfetched rows in order to make the connection reusable for the next sockpool user. However, this logic may undesirably consume events from a consumer. Disable it by default.

Fixes #3361.

cdb2_close() attempts to drain unfetched rows in order to make the connection reusable for
the next sockpool user. However, this logic may undesirably consume events from a consumer.
Disable it by default.

Fixes bloomberg#3361.

Signed-off-by: Rivers Zhang <hzhang320@bloomberg.net>
roborivers left a comment

Lint check:

@@ cdb2api/cdb2api.c @@
+ Please port the change to cdb2jdbc if necessary.
+ If you add any internal configuration state to this file, please update the reset_the_configuration() function as well to include it.

roborivers left a comment

Coding style check: Success ✓.
Smoke testing: Success ✓.
Cbuild submission: Success ✓.
Regression testing: 9/440 tests failed ⚠.

The first 10 failing tests are:
sc_downgrade [core dumped]
socksql_master_swings
sc_truncate
phys_rep
ignore_bad_table
weighted_standing_queue
load_cache_dumpmax_generated
load_cache_autocache_generated
cdb2_close
sc_downgrade

akshatsikarwar (Contributor)

I was thinking it might be better to simply track whether cdb2_run_statement has ever received a ping-pong request, and if so skip donating the connection to sockpool (and thus there would be no need to drain it). I don't want to disable it by default.

akshatsikarwar (Contributor)

Also need to patch consume_previous_query in the API.

akshatsikarwar (Contributor) commented Aug 3, 2022

bb-cdb2api does not have auto-consume in cdb2_close, but does in cdb2_run_statement:

   else if (hndl->ack) {
       /* Consumer acked: reset and hard-close the socket instead of draining. */
       hndl->ack = 0;
       if (hndl->sb) {
           int fd = cdb2buf_fileno(hndl->sb);
           shutdown(fd, SHUT_RDWR);
       }
   }
   else {
       /* Regular query: drain unfetched rows so the socket can be reused. */
       while (cdb2_next_record_int(hndl) == CDB2_OK)
           ;
   }

This is insufficient, as the user might have called cdb2_clear_ack. I think we need to track whether the prior cdb2_run_statement was for a consumer or not (both here and in bb-cdb2api).

riverszhang89 (Contributor, Author)

Yes, you're right. We need to patch these places to handle a mixed use of 'exec procedure' on a consumer and regular queries.

riverszhang89 marked this pull request as draft on August 3, 2022
akshatsikarwar (Contributor) commented Aug 4, 2022

Let us disable for now - we can make it robust later.

akshatsikarwar marked this pull request as ready for review on August 5, 2022
akshatsikarwar merged commit ec13cc9 into bloomberg:master on Aug 5, 2022
riverszhang89 added a commit to riverszhang89/comdb2 that referenced this pull request Aug 10, 2022
Auto-consume needs to be enabled for this test to pass (it was disabled in bloomberg#3362).

Signed-off-by: Rivers Zhang <hzhang320@bloomberg.net>
riverszhang89 added a commit that referenced this pull request Aug 10, 2022
Auto-consume needs to be enabled for this test to pass (it was disabled in #3362).

Signed-off-by: Rivers Zhang <hzhang320@bloomberg.net>
Successfully merging this pull request may close: Potential data loss with lua triggers (#3361)