
ddl: fix admin repair table will reload fail on the other node (#18285) #18323

Merged
4 commits merged into pingcap:release-4.0 on Jul 10, 2020

Conversation

ti-srebot (Contributor) commented:

cherry-pick #18285 to release-4.0


What problem does this PR solve?

Problem Summary:
In a TiDB cluster, if one node is configured as a repair-mode node, the other nodes cannot reload the new schema info after a table has been repaired.

The reason:
repair-mode node: its information schema has already filtered out the repaired table, so applying the create-table logic alone is correct on this node.
other nodes: their information schemas still include the repaired table, so they should apply a drop-table first and then create the new table.

Root cause:
sortedTablesBuckets appends the new repaired table into the bucket, so when the table is later looked up with TableByID, the binary search finds the first, old entry rather than the new one with the same table ID (hence the old entry should be dropped first). A toy illustration of the lookup problem follows.
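
A minimal, self-contained sketch of the failure mode; the types `tbl`, `sortedTables`, and `searchTable` are illustrative stand-ins, not the actual infoschema code:

```go
package main

import (
	"fmt"
	"sort"
)

// Toy stand-in for one bucket of sortedTablesBuckets: tables kept sorted by
// ID so a TableByID-style lookup can binary-search over them.
type tbl struct {
	id   int64
	name string
}

type sortedTables []tbl

// searchTable returns the index of the first entry whose ID is >= id,
// mimicking a binary-search lookup over the sorted bucket.
func (s sortedTables) searchTable(id int64) int {
	return sort.Search(len(s), func(i int) bool { return s[i].id >= id })
}

func main() {
	bucket := sortedTables{{id: 1, name: "old (pre-repair) definition"}}

	// Applying the repair diff WITHOUT dropping the old entry first: the
	// repaired table is appended and the bucket re-sorted by ID.
	bucket = append(bucket, tbl{id: 1, name: "repaired definition"})
	sort.SliceStable(bucket, func(i, j int) bool { return bucket[i].id < bucket[j].id })

	// The lookup lands on the FIRST entry with ID 1 -- the stale definition.
	fmt.Println(bucket[bucket.searchTable(1)].name) // old (pre-repair) definition
}
```

With two entries sharing one table ID, the stale definition shadows the repaired one, which is exactly what the non-repair nodes observed.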

What is changed and how it works?

How it works: drop the old table first when applying a repaired table in ApplyDiff. A rough sketch of the ordering follows.
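
The sketch below captures the drop-then-create idea with hypothetical names (`builder`, `applyRepairTable`, a map in place of sorted buckets); it is not the real infoschema.Builder API:

```go
package main

import "fmt"

// builder is a toy stand-in for infoschema.Builder: it maps table ID to a
// table definition. The real code keeps sorted per-schema buckets instead.
type builder struct {
	tables map[int64]string
}

// applyRepairTable mirrors the fix: for a repair-table diff, drop whatever
// old entry this node still holds, then run the usual create-table logic.
func (b *builder) applyRepairTable(id int64, repairedDef string) {
	// On the repair-mode node the broken table was already filtered out of
	// the schema, so this drop is a no-op there; on every other node it
	// removes the stale entry that would otherwise shadow the repaired one.
	delete(b.tables, id)
	// Create path, identical on all nodes.
	b.tables[id] = repairedDef
}

func main() {
	otherNode := &builder{tables: map[int64]string{1: "old definition"}}
	otherNode.applyRepairTable(1, "repaired definition")
	fmt.Println(otherNode.tables[1]) // repaired definition
}
```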

Related changes

  • Need to cherry-pick to the release branch

Check List

Tests

  • Manual test (add detailed scripts or steps below)
1: tiup playground --db 2
2: kill one tidb node, swap in the new binary, and start it again
3: restart one node with repair-mode
4: run the repair SQL on it
5: check whether this node can reload the repaired table
6: check whether the other node can reload the repaired table (a sketch of this check follows the list)
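
For steps 5 and 6, a check along these lines could be used. The ports, the table name test.t, and the checkReload helper are assumptions for illustration; adjust them to your playground layout:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL-protocol driver; TiDB speaks it
)

// checkReload queries the repaired table through one TiDB node and reports
// whether that node's information schema has picked up the new definition.
func checkReload(dsn string) error {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return err
	}
	defer db.Close()
	var n int
	// Any statement touching the table forces a schema lookup on this node.
	if err := db.QueryRow("SELECT COUNT(*) FROM test.t").Scan(&n); err != nil {
		return fmt.Errorf("node %s failed to load repaired table: %w", dsn, err)
	}
	fmt.Printf("node %s sees the repaired table, %d rows\n", dsn, n)
	return nil
}

func main() {
	// Assumed ports for two playground TiDB nodes; adjust to your deployment.
	for _, dsn := range []string{
		"root@tcp(127.0.0.1:4000)/test",
		"root@tcp(127.0.0.1:4001)/test",
	} {
		if err := checkReload(dsn); err != nil {
			log.Fatal(err)
		}
	}
}
```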

Release note

  • ddl: fix admin repair table will reload fail on the other node

Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
@ti-srebot (Contributor Author) commented:

/run-all-tests

The review thread below is attached to this conflicted hunk in Builder.ApplyDiff:

@@ -74,7 +74,17 @@ func (b *Builder) ApplyDiff(m *meta.Meta, diff *model.SchemaDiff) ([]int64, error) {
// We try to reuse the old allocator, so the cached auto ID can be reused.
var allocs autoid.Allocators
if tableIDIsValid(oldTableID) {
<<<<<<< HEAD
Contributor comment on the hunk above: Fix conflicts

@zz-jason modified the milestones: v4.0.2 → v4.0.3 on Jul 10, 2020
@djshow832 (Contributor) left a comment:
LGTM

@ti-srebot added the status/LGT1 label (Indicates that a PR has LGTM 1.) on Jul 10, 2020
@ti-srebot (Contributor Author) commented:

@djshow832, thanks for your review.

@zimulala (Contributor) left a comment:
LGTM

@ti-srebot added the status/LGT2 label (Indicates that a PR has LGTM 2.) and removed the status/LGT1 label on Jul 10, 2020
@ti-srebot (Contributor Author) commented:

@zimulala, thanks for your review.

@AilinKid (Contributor) commented:

/run-all-tests

@AilinKid (Contributor) commented:

/run-all-tests

@zimulala merged commit 05c65a9 into pingcap:release-4.0 on Jul 10, 2020
@AilinKid deleted the release-4.0-f31298f5bb55 branch on July 10, 2020 at 06:31
Labels: sig/sql-infra (SIG: SQL Infra), status/LGT2 (Indicates that a PR has LGTM 2.), type/bugfix (This PR fixes a bug.), type/4.0-cherry-pick