ddl: fix admin repair table will reload fail on the other node #18285
Conversation
Codecov Report
@@ Coverage Diff @@
## master #18285 +/- ##
================================================
+ Coverage 79.4536% 79.4659% +0.0122%
================================================
Files 535 535
Lines 144274 144282 +8
================================================
+ Hits 114631 114655 +24
+ Misses 20364 20358 -6
+ Partials 9279 9269 -10
LGTM
@djshow832, thanks for your review.
LGTM
@bb7133, thanks for your review.
/run-unit-test
infoschema/builder.go
Outdated
@@ -75,7 +75,7 @@ func (b *Builder) ApplyDiff(m *meta.Meta, diff *model.SchemaDiff) ([]int64, error) {
 	var allocs autoid.Allocators
 	if tableIDIsValid(oldTableID) {
 		if oldTableID == newTableID && diff.Type != model.ActionRenameTable &&
-			diff.Type != model.ActionExchangeTablePartition {
+			diff.Type != model.ActionExchangeTablePartition && diff.Type != model.ActionRepairTable {
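The changed guard can be read as: reuse the old table's auto-ID allocators only when the table ID is unchanged and the DDL action does not replace the table object outright. A minimal, self-contained sketch of that condition (the `actionType` values here are illustrative stand-ins for `model.ActionType`, not the real constants):

```go
package main

import "fmt"

// actionType stands in for model.ActionType; values are illustrative only.
type actionType int

const (
	actionCreateTable actionType = iota
	actionRenameTable
	actionExchangeTablePartition
	actionRepairTable
)

// keepAllocators mirrors the guard in the diff: keep the old allocators
// only when the table ID is unchanged and the action is not one that
// replaces the table object (rename, exchange partition, repair).
func keepAllocators(oldID, newID int64, tp actionType) bool {
	return oldID == newID &&
		tp != actionRenameTable &&
		tp != actionExchangeTablePartition &&
		tp != actionRepairTable
}

func main() {
	fmt.Println(keepAllocators(2, 2, actionCreateTable)) // true: same table object
	fmt.Println(keepAllocators(2, 2, actionRepairTable)) // false: drop the old table first
}
```

With `ActionRepairTable` excluded from the guard, a repair falls through to the drop-then-create path even though the table ID is unchanged.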
Please add a comment here.
good, addressed
Signed-off-by: AilinKid <314806019@qq.com>
/run-all-tests
/run-all-tests
Signed-off-by: AilinKid <314806019@qq.com>
LGTM
@zimulala, thanks for your review.
Sorry @zimulala, you don't have permission to trigger auto merge event on this branch. The number of
/run-all-tests
Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
cherry pick to release-4.0 in PR #18323
What problem does this PR solve?
Problem Summary:
In a TiDB cluster, if one node is configured as the repair-mode node, the other nodes cannot reload the new schema information after a table has been repaired.
The reason:
repair-node: its information schema has filtered out the tables under repair, so for itself, applying the create-table logic is enough.
other-node: their information schema still includes the tables under repair, so for them, a drop-table should be applied first, and then the new table created.
Root cause:
sortedTablesBuckets appends the new repaired table, so when the table is looked up with TableByID, the binary search returns the first (old) entry rather than the new one with the same table ID (which is why the old one must be dropped first).
What is changed and how it works?
How it works: drop the old table first when applying the repaired table in ApplyDiff.
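The root cause can be illustrated with a minimal sketch of the bucket lookup. The `table` struct and `tableByID` helper below are simplified stand-ins for TiDB's internals (`sortedTablesBuckets` and `TableByID`), not the real types:

```go
package main

import (
	"fmt"
	"sort"
)

// table is a minimal stand-in for TiDB's table object; only the ID matters here.
type table struct {
	id   int64
	name string
}

// tableByID mimics the TableByID lookup: the bucket is kept sorted by
// table ID and searched with sort.Search (binary search), which returns
// the FIRST entry whose ID matches.
func tableByID(bucket []table, id int64) (table, bool) {
	idx := sort.Search(len(bucket), func(i int) bool { return bucket[i].id >= id })
	if idx < len(bucket) && bucket[idx].id == id {
		return bucket[idx], true
	}
	return table{}, false
}

func main() {
	// A repaired table keeps its old table ID, so appending the new copy
	// without dropping the old one leaves two entries with the same ID.
	bucket := []table{{1, "t1"}, {2, "t2_old"}}
	bucket = append(bucket, table{2, "t2_repaired"})
	sort.SliceStable(bucket, func(i, j int) bool { return bucket[i].id < bucket[j].id })

	got, _ := tableByID(bucket, 2)
	fmt.Println(got.name) // the stale entry wins the binary search: t2_old
}
```

Because the stable sort keeps the older duplicate first, the binary search keeps returning the stale entry; dropping the old table before applying the repaired one removes the duplicate ID and makes the lookup return the new table.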
Related changes
Check List
Tests
Release note