rbd: add a workaround to fix rbd snapshot scheduling #2656

Merged: 4 commits, Nov 19, 2021

Conversation

ShyamsundarR
Contributor

@ShyamsundarR ShyamsundarR commented Nov 18, 2021

Currently, we have a bug in the rbd mirror scheduling module: after a failover and failback, the scheduling is not updated and the mirroring snapshots are not created periodically as per the scheduling interval. This PR works around it by doing the operations below:

  • Create a dummy (csi-vol-dummy-<ceph fsID>) image per cluster; this image should be easily identifiable.

  • During the Promote operation on any image, enable mirroring on the dummy image. When we enable mirroring on the dummy image, the pool gets updated and the scheduling is reconfigured.

  • During the Demote operation on any image, disable mirroring on the dummy image. The disable is needed so that mirroring can be enabled again when we get the Promote request to make the image primary.

  • When DR is no longer needed, this image needs to be cleaned up manually for now, as we don't want to add a check to the existing DeleteVolume code path for deleting dummy images, since it would impact the performance of the DeleteVolume workflow.

Moved adding the scheduling to the Promote operation, since the schedule needs to be added when an image is promoted; that is the correct point at which to add it so the scheduling actually takes effect.
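A minimal, runnable sketch of the workaround's control flow. The helper names, the dummy-image naming, and the logging bodies are illustrative assumptions, not the actual ceph-csi API:

    package main

    import "fmt"

    // Hypothetical stand-ins for the real mirroring calls used by ceph-csi;
    // here they only log, so the control flow can run as-is.
    func enableImageMirroring(image string) error {
        fmt.Println("mirroring enabled on", image)
        return nil
    }

    func disableImageMirroring(image string) error {
        fmt.Println("mirroring disabled on", image)
        return nil
    }

    // Assumption: the dummy image name is this prefix plus the cluster fsID.
    const dummyImagePrefix = "csi-vol-dummy-"

    // tickleDummyImage toggles mirroring on the per-cluster dummy image so
    // that the pool gets updated and snapshot schedules are reconfigured.
    func tickleDummyImage(clusterFSID string, promote bool) error {
        dummy := dummyImagePrefix + clusterFSID
        if promote {
            // Promote: enabling mirroring on the dummy image forces the
            // pool update that restarts snapshot scheduling.
            return enableImageMirroring(dummy)
        }
        // Demote: disable mirroring so the next Promote can enable it
        // again and trigger another pool update.
        return disableImageMirroring(dummy)
    }

    func main() {
        _ = tickleDummyImage("ceph-fsid-1234", true)  // Promote path
        _ = tickleDummyImage("ceph-fsid-1234", false) // Demote path
    }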

More details at https://bugzilla.redhat.com/show_bug.cgi?id=2019161
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>

Moved adding the scheduling to the promote operation, as the schedule needs to be added when the image is promoted; this is the correct method of adding the scheduling to make it take effect.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
added helper function to get the cluster ID.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently we have a bug in the rbd mirror scheduling module. After doing failover and failback, the scheduling is not getting updated and the mirroring snapshots are not getting created periodically as per the scheduling interval. This PR works around it by doing the below operations:

* Create a dummy (unique) image per cluster; this image should be easily identifiable.

* During the Promote operation on any image, enable mirroring on the dummy image. When we enable mirroring on the dummy image, the pool gets updated and the scheduling is reconfigured.

* During the Demote operation on any image, disable mirroring on the dummy image. The disable needs to be done so we can enable mirroring again when we get the Promote request to make the image primary.

* When DR is no longer needed, this image needs to be cleaned up manually for now, as we don't want to add a check in the existing DeleteVolume code path for deleting the dummy image, since it would impact the performance of the DeleteVolume workflow.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
@mergify mergify bot added component/rbd Issues related to RBD bug Something isn't working labels Nov 18, 2021

@BenamarMk BenamarMk left a comment


LGTM. I will test the change.

    if err != nil {
        return err
    }
    dummyImageOpsLock.Unlock()
Collaborator


Shouldn't the Unlock be in a defer? If there is an error in the above steps, the unlock never happens and we won't be able to get the lock again.

Contributor Author


Thanks, fixed now.
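For reference, a minimal runnable sketch of the defer-based pattern the fix adopts; doWork is a hypothetical stand-in for the steps guarded by the lock:

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var dummyImageOpsLock sync.Mutex

    // doWork stands in for the fallible steps between Lock and Unlock.
    func doWork() error { return errors.New("simulated failure") }

    func tickleDummyImage() error {
        dummyImageOpsLock.Lock()
        // defer guarantees the unlock runs even on the early error return,
        // so a failure cannot leave the mutex held forever.
        defer dummyImageOpsLock.Unlock()

        if err := doWork(); err != nil {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(tickleDummyImage()) // prints the error; the lock was still released
    }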


@BenamarMk BenamarMk left a comment


@ShyamsundarR I tested your patch and it works as expected. So it is fine other than the legitimate comment from @Madhu-1. We should be able to merge it once that is fixed.

The dummy mirror image needs to be disabled and then
reenabled for mirroring, to ensure a newly promoted
primary is now starting to schedule snapshots.

Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
@ShyamsundarR
Contributor Author

The latest push was to add a "." at the end of a comment to address a linter failure.

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@Mergifyio rebase

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

/retest all

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@Mergifyio rebase

@Madhu-1 Madhu-1 added the Priority-0 highest priority issue label Nov 19, 2021
@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

I1119 03:03:23.221198 1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = invalid encryption kms configuration: failed connecting to Vault: failed to get the authentication token: Put "http://vault.cephcsi-e2e-ab3babb4a6ad.svc.cluster.local:8200/v1/auth/kubernetes/login": dial tcp: lookup vault.cephcsi-e2e-ab3babb4a6ad.svc.cluster.local: no such host

encryption tests are failing. not related to this PR.

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@humblec looks like the cephcsi CI is broken due to a Vault issue. I'm planning to merge this one manually. WDYT?

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

I1119 03:03:23.221198 1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = invalid encryption kms configuration: failed connecting to Vault: failed to get the authentication token: Put "http://vault.cephcsi-e2e-ab3babb4a6ad.svc.cluster.local:8200/v1/auth/kubernetes/login": dial tcp: lookup vault.cephcsi-e2e-ab3babb4a6ad.svc.cluster.local: no such host

encryption tests are failing. not related to this PR.

#2657

    dummyImageCreated operation = "dummyImageCreated"
    // Read write lock to ensure that only one operation is happening at a time.
    operationLock = sync.Map{}

Collaborator


nit: extra line not needed, but we can live with that.
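For background, a sync.Map is commonly used as a per-key operation lock via LoadOrStore; a minimal runnable sketch, with hypothetical helper names:

    package main

    import (
        "fmt"
        "sync"
    )

    // operationLock tracks in-flight operations by key; only one caller
    // at a time may "hold" the entry for a given key.
    var operationLock = sync.Map{}

    // tryAcquire reports whether the caller obtained the lock for key.
    func tryAcquire(key string) bool {
        _, alreadyRunning := operationLock.LoadOrStore(key, struct{}{})
        return !alreadyRunning
    }

    func release(key string) {
        operationLock.Delete(key)
    }

    func main() {
        fmt.Println(tryAcquire("dummyImageCreated")) // true: acquired
        fmt.Println(tryAcquire("dummyImageCreated")) // false: already held
        release("dummyImageCreated")
        fmt.Println(tryAcquire("dummyImageCreated")) // true again
    }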

        return nil, status.Errorf(codes.Internal, "failed to get mirroring mode %s", err.Error())
    }

    log.DebugLog(ctx, "Attempting to tickle dummy image for restarting RBD schedules")
Collaborator


"Attempting" should be lowercase: "attempting".

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@humblec are you expecting a change for the above comments as part of this PR?

@humblec
Collaborator

humblec commented Nov 19, 2021

@ShyamsundarR I tested your patch and it works as expected. So it is fine other than the legitimate comment from @Madhu-1. We should be able to merge it once that is fixed.

Thanks @BenamarMk for confirming.

@humblec are you expecting a change for the above comments as part of this PR?

@Madhu-1 yeah, the CI failure is on encryption; considering the urgency of this PR, I am fine to merge manually 👍, I have a couple of minor comments though.

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@ShyamsundarR I tested your patch and it works as expected. So it is fine other than the legitimate comment from @Madhu-1. We should be able to merge it once that is fixed.

Thanks @BenamarMk for confirming.

@humblec are you expecting a change for the above comments as part of this PR?

@Madhu-1 yeah, the CI failure is on encryption, considering the urgency of this PR, I am fine to merge manually +1 , I have a couple of minor comments though.

@humblec we can address that in a follow-up PR as it's not a blocker for this PR?

@humblec
Collaborator

humblec commented Nov 19, 2021

@Madhu-1 @ShyamsundarR compared to the previous PR, I couldn't get the corner case this PR fixed, though. It would be appreciated if we can have a quick mention somewhere in this PR. It looks to me that the dummy image scheduling has now been set to a short/1-minute period for the other images to rearrange their reschedule (maybe more iterations for them to reschedule) compared to the previous version; I could be wrong.

@humblec
Collaborator

humblec commented Nov 19, 2021

@ShyamsundarR I tested your patch and it works as expected. So it is fine other than the legitimate comment from @Madhu-1. We should be able to merge it once that is fixed.

Thanks @BenamarMk for confirming.

@humblec are you expecting a change for the above comments as part of this PR?

@Madhu-1 yeah, the CI failure is on encryption, considering the urgency of this PR, I am fine to merge manually +1 , I have a couple of minor comments though.

@humblec we can address that in a follow-up PR as it's not a blocker for this PR?

sure, no worries, go ahead

@Madhu-1
Collaborator

Madhu-1 commented Nov 19, 2021

@Madhu-1 @ShyamsundarR compared to the previous PR, I couldn't get the corner case this PR fixed, though. It would be appreciated if we can have a quick mention somewhere in this PR. It looks to me that the dummy image scheduling has now been set to a short/1-minute period for the other images to rearrange their reschedule (maybe more iterations for them to reschedule) compared to the previous version; I could be wrong.

The previous PR was not resetting the scheduling for all the rbd images; the enable and disable was done only in one PromoteVolume. To reset the scheduling, the enable and disable of mirroring needs to be done for all the images. More details are in this PR description.

@BenamarMk

BenamarMk commented Nov 19, 2021

Tested an app with multiple PVCs (for the same app) and failover/relocation multiple times, and the fix worked.
While testing, I noticed (kind of) an issue that might resolve or work around the fsck issue (BZ:2021460). I'll investigate next.


    interval, startTime := getSchedulingDetails(req.GetParameters())
    if interval != admin.NoInterval {
        err = rbdVol.addSnapshotScheduling(interval, startTime)

@BenamarMk BenamarMk Nov 22, 2021


@ShyamsundarR @Madhu-1 we have a bug in this line. rbdVol was changed to the dummy rbdVol object at line 527. In this line, we still point to that same dummy rbdVol, so adding the schedule here adds it to the wrong image. I didn't catch this in my testing because my real image also has a schedule of 1m. If you change the real image's schedule to something else, you will see the schedule "something else" added to the dummy image.

@Madhu-1 It is an easy fix. Can you make the change?
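A minimal runnable illustration of the variable-reuse bug described above; the image type and names are hypothetical stand-ins for the real rbdVolume:

    package main

    import "fmt"

    type image struct{ name string }

    func (i *image) addSnapshotScheduling(interval string) {
        fmt.Printf("schedule %q added to image %q\n", interval, i.name)
    }

    func main() {
        rbdVol := &image{name: "real-image"}
        // ... the dummy-image handling reassigns the same variable ...
        rbdVol = &image{name: "csi-vol-dummy"}
        // The later scheduling call now targets the dummy image, not the
        // real one, which is exactly the bug reported here:
        rbdVol.addSnapshotScheduling("3m") // lands on "csi-vol-dummy"
    }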

Collaborator


Got it. Tested the fix; making a PR.


Thanks @Madhu-1 for providing the fix so quickly. I have tested patch #2669 and it works as intended.

Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Dec 7, 2021
We added a workaround for rbd scheduling by creating a dummy image in ceph#2656. With that fix, we create a dummy image of the same size as the first actual rbd image sent in the EnableVolumeReplication request; if the actual rbd image size is 1TiB, we create a 1TiB dummy image, which is not good. Even though it is a thin-provisioned rbd image, this causes issues for the transfer of the snapshot during the mirroring operation.

This commit recreates the rbd image with a 1MiB size, which is the smallest supported size in rbd.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
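A sketch of the idea behind this follow-up fix, with hypothetical names; createDummyImage stands in for the real rbd image creation call:

    package main

    import "fmt"

    // 1 MiB: per the commit message, the smallest supported rbd image size.
    const dummyImageSize uint64 = 1 << 20

    // createDummyImage is a hypothetical stand-in; the point is that the
    // size is fixed at 1MiB instead of copied from the first replicated image.
    func createDummyImage(name string, size uint64) {
        fmt.Printf("creating image %q with size %d bytes\n", name, size)
    }

    func main() {
        createDummyImage("csi-vol-dummy-ceph-fsid-1234", dummyImageSize)
    }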
mergify bot pushed a commit that referenced this pull request Dec 7, 2021
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/ceph-csi that referenced this pull request Dec 8, 2021
Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Dec 14, 2021
(cherry picked from commit 9a4533e)
Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Oct 5, 2022
To address the problem that snapshot schedules are not triggered for volumes that are promoted, a dummy image was disabled/enabled for replication. This was done as a workaround because the promote operation was not triggering the schedules for the image being promoted.

The bugs related to this have been fixed in the RBD mirroring functionality, and hence the workaround from ceph#2656 can be removed from the code base.

ceph tracker: https://tracker.ceph.com/issues/53914

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Oct 7, 2022
mergify bot pushed a commit that referenced this pull request Oct 10, 2022
Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Oct 10, 2022
(cherry picked from commit 71e5b3f)
Labels
bug (Something isn't working) · component/rbd (Issues related to RBD) · Priority-0 (highest priority issue)
4 participants