
ddl: fix admin repair table will reload fail on the other node #18285

Merged
merged 8 commits into from
Jul 2, 2020

Conversation

AilinKid
Contributor

@AilinKid AilinKid commented Jun 30, 2020

What problem does this PR solve?

Problem Summary:
Now in a TiDB cluster, if one node is configured as a repair-mode node, the other nodes cannot reload the new schema info after the table has been repaired.

The reason:
repair-mode node: its information schema has already filtered out the repaired tables, so applying the create-table logic works for it.
other nodes: their information schema still includes the repaired tables, so for them a drop table should be applied first, then the new table created.

Root cause:
sortedTablesBuckets appends the new repaired table into the bucket, so when the table is looked up with TableByID, the binary search returns the first, old entry rather than the new one with the same table ID (which is why the old entry should be dropped first).

What is changed and how it works?

How it works: drop the old table first when applying the repaired table's schema diff in ApplyDiff.
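The shadowing described above can be sketched in Go. This is an illustrative model only — `tbl`, `tableByID`, and `dropTable` are simplified stand-ins for the infoschema bucket, not the actual TiDB code:

```go
package main

import (
	"fmt"
	"sort"
)

// tbl models an entry in sortedTablesBuckets: entries are kept
// sorted by table ID and looked up by binary search, so when two
// entries share an ID, the search returns the first (old) one.
type tbl struct {
	id   int64
	meta string
}

// tableByID mimics TableByID's binary search over a sorted bucket:
// it returns the first entry whose ID matches.
func tableByID(bucket []tbl, id int64) (tbl, bool) {
	i := sort.Search(len(bucket), func(i int) bool { return bucket[i].id >= id })
	if i < len(bucket) && bucket[i].id == id {
		return bucket[i], true
	}
	return tbl{}, false
}

// dropTable removes every entry with the given ID, as the fix does
// before re-inserting the repaired table on non-repair nodes.
func dropTable(bucket []tbl, id int64) []tbl {
	out := bucket[:0:0]
	for _, t := range bucket {
		if t.id != id {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	// Non-repair node: the old table (ID 2) is still in the bucket.
	bucket := []tbl{{1, "t1"}, {2, "old"}}

	// Buggy path: append the repaired table without dropping the old one.
	// The stable sort keeps the old entry first, so lookup still sees it.
	buggy := append(append([]tbl{}, bucket...), tbl{2, "repaired"})
	sort.SliceStable(buggy, func(i, j int) bool { return buggy[i].id < buggy[j].id })
	got, _ := tableByID(buggy, 2)
	fmt.Println("without drop:", got.meta) // without drop: old

	// Fixed path: drop the old entry first, then insert the repaired one.
	fixed := append(dropTable(bucket, 2), tbl{2, "repaired"})
	sort.SliceStable(fixed, func(i, j int) bool { return fixed[i].id < fixed[j].id })
	got, _ = tableByID(fixed, 2)
	fmt.Println("with drop:", got.meta) // with drop: repaired
}
```

The buggy path prints the stale entry, matching the symptom the PR describes: the repaired table is appended but never found.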

Related changes

  • Need to cherry-pick to the release branch

Check List

Tests

  • Manual test (add detailed scripts or steps below)
1: tiup playground --db 2
2: kill one tidb node, replace its binary, and start it again
3: restart that node with repair-mode
4: run the repair SQL on it
5: check whether that node can reload the repaired table
6: check whether the other node can reload the repaired table
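The repair-mode setup in step 3 could look like the following config fragment. This is a hedged sketch: the option names follow TiDB 4.0's table-repair feature, and `test.t` is a placeholder table name:

```toml
# tidb.toml on the node restarted in repair mode (step 3).
# repair-mode and repair-table-list are the table-repair options;
# "test.t" is a placeholder for the corrupted table.
repair-mode = true
repair-table-list = ["test.t"]
```

Step 4 would then be an `ADMIN REPAIR TABLE test.t CREATE TABLE ...;` statement run against that node, after which steps 5 and 6 verify that both nodes see the repaired definition.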

Release note

  • ddl: fix admin repair table will reload fail on the other node

@AilinKid AilinKid added sig/sql-infra SIG: SQL Infra type/bugfix This PR fixes a bug. labels Jun 30, 2020
@codecov

codecov bot commented Jun 30, 2020

Codecov Report

Merging #18285 into master will increase coverage by 0.0122%.
The diff coverage is 100.0000%.

@@               Coverage Diff                @@
##             master     #18285        +/-   ##
================================================
+ Coverage   79.4536%   79.4659%   +0.0122%     
================================================
  Files           535        535                
  Lines        144274     144282         +8     
================================================
+ Hits         114631     114655        +24     
+ Misses        20364      20358         -6     
+ Partials       9279       9269        -10     

Contributor

@djshow832 djshow832 left a comment


LGTM

@ti-srebot
Copy link
Contributor

@djshow832, thanks for your review.

Member

@bb7133 bb7133 left a comment


LGTM

@ti-srebot
Contributor

@bb7133, thanks for your review.

@bb7133 bb7133 added require-LGT3 Indicates that the PR requires three LGTM. status/LGT2 Indicates that a PR has LGTM 2. needs-cherry-pick-4.0 labels Jul 1, 2020
@AilinKid
Contributor Author

AilinKid commented Jul 1, 2020

/run-unit-test

@@ -75,7 +75,7 @@ func (b *Builder) ApplyDiff(m *meta.Meta, diff *model.SchemaDiff) ([]int64, erro
 	var allocs autoid.Allocators
 	if tableIDIsValid(oldTableID) {
 		if oldTableID == newTableID && diff.Type != model.ActionRenameTable &&
-			diff.Type != model.ActionExchangeTablePartition {
+			diff.Type != model.ActionExchangeTablePartition && diff.Type != model.ActionRepairTable {
Contributor


Please add a comment here.

Contributor Author


good, addressed

Signed-off-by: AilinKid <314806019@qq.com>
@AilinKid
Contributor Author

AilinKid commented Jul 2, 2020

/run-all-tests

@AilinKid
Contributor Author

AilinKid commented Jul 2, 2020

/run-all-tests

Signed-off-by: AilinKid <314806019@qq.com>
Contributor

@zimulala zimulala left a comment


LGTM

@ti-srebot ti-srebot removed the status/LGT2 Indicates that a PR has LGTM 2. label Jul 2, 2020
@ti-srebot
Contributor

@zimulala, thanks for your review.

@zimulala zimulala added the status/can-merge Indicates a PR has been approved by a committer. label Jul 2, 2020
@ti-srebot
Contributor

Sorry @zimulala, you don't have permission to trigger the auto-merge event on this branch. The number of LGTMs for this PR is 0 while it needs 2.

@zimulala zimulala added status/LGT3 The PR has already had 3 LGTM. status/can-merge Indicates a PR has been approved by a committer. and removed status/can-merge Indicates a PR has been approved by a committer. labels Jul 2, 2020
@ti-srebot
Contributor

Sorry @zimulala, you don't have permission to trigger the auto-merge event on this branch. The number of LGTMs for this PR is 0 while it needs 2.

@AilinKid
Contributor Author

AilinKid commented Jul 2, 2020

/run-all-tests

@AilinKid AilinKid merged commit f31298f into pingcap:master Jul 2, 2020
ti-srebot pushed a commit to ti-srebot/tidb that referenced this pull request Jul 2, 2020
Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
@ti-srebot
Contributor

cherry pick to release-4.0 in PR #18323
