
Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller) #177

Closed
jingxu97 opened this issue Jan 23, 2017 · 112 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/storage Categorizes an issue or PR as relevant to SIG Storage. stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status

Comments

@jingxu97
Contributor

jingxu97 commented Jan 23, 2017

Feature Description

  • One-line feature description (can be used as a release note):
Snapshot / restore functionality for Kubernetes and CSI. This provides a standardized API design (CRDs) and adds PV snapshot/restore support for CSI volume drivers.

Old description:

Expose the ability in the Kubernetes API to create, list, delete, and restore snapshots on any underlying storage system that supports it.
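
For illustration, a minimal sketch of the CRD-based API this eventually became at GA (all object names, the snapshot class, and the size below are placeholders, not part of the original proposal): a VolumeSnapshot is taken from an existing PVC, and a new PVC is restored from that snapshot via its dataSource field.

```yaml
# A VolumeSnapshot taken from an existing PVC (all names are placeholders).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: example-pvc
---
# A new PVC restored from that snapshot via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-restored
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # must be at least the snapshot's restore size
  dataSource:
    name: example-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```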
@jingxu97 jingxu97 added this to the next-milestone milestone Jan 23, 2017
@mdelio mdelio added the sig/storage Categorizes an issue or PR as relevant to SIG Storage. label Jan 24, 2017
@davidopp
Member

@timothysc
Member

/cc @skriss

@mdelio

mdelio commented May 4, 2017

@jingxu97 I think we're going to have something in alpha for 1.7. Can we please set the milestone to 1.7?

@calebamiles calebamiles modified the milestones: v1.7, next-milestone May 4, 2017
@calebamiles calebamiles added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label May 4, 2017
@calebamiles
Contributor

@kubernetes/sig-storage-feature-requests could someone please update the issue description to the new template. Thanks!

@jingxu97
Contributor Author

jingxu97 commented May 4, 2017 via email

@alkar

alkar commented May 9, 2017

@jingxu97
Contributor Author

jingxu97 commented May 9, 2017 via email

@alkar

alkar commented May 10, 2017

@jingxu97 thanks! I can't access it but I just requested access.

@yanivlavi

@jingxu97 I wanted to add an important note here regarding the overall approach. I'm not sure this is the right place to put this, but I would be happy to be guided to the right forum to bring this up.

Coming from a long background in oVirt and OSP, I feel there is an important aspect that needs to be discussed: ownership of the state of snapshots (and overall volume metadata), and discoverability of this metadata.

In cloud and on-premise environments, users (developers and admins) might prefer the native storage APIs over the Kube APIs. Kube is also not the one creating the snapshot; it is the cloud service or the storage itself, so Kube is not the owner of this metadata.

In OSP they have already made this mistake with Cinder, which forces users who want to use snapshots to go through the Cinder API for those snapshots to be available in the OSP environment. Cinder doesn't know about snapshots created directly on the storage, and if you lose Cinder you lose everything, since the metadata that counts lives in its stateful DB.

What I'm trying to say is that it is important that Kube doesn't try to be the owner of the volume metadata. That means things like periodically checking whether a new snapshot was created directly via the storage service API, and using the storage service's own snapshot ID as the snapshot's ID.

It is clear that Kube should expose snapshotting, since it is very much needed for the container use case, but it is very important that it doesn't become a storage abstraction service like Cinder. We do not want to chase storage service features or limit users to the Kube API; it should be an option. We also want to allow discovery of volumes no matter where they were created.

As we get to more complex features like QoS and oversubscription, for example, we want to expose and reuse the storage service capabilities, not replace them or block users from using them via the cloud service or storage management APIs.
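
For reference, the snapshot API that eventually shipped addresses the discovery concern to a degree: a snapshot created outside Kubernetes can be imported by pre-provisioning a VolumeSnapshotContent that records the storage system's own snapshot ID. A minimal sketch, with a placeholder driver name, snapshot handle, and object names:

```yaml
# Pre-provisioned content object pointing at a snapshot created outside Kubernetes.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-snapshot-content          # placeholder
spec:
  deletionPolicy: Retain                   # deleting the object keeps the backend snapshot
  driver: hostpath.csi.k8s.io              # placeholder CSI driver name
  source:
    snapshotHandle: snap-0123456789abcdef  # the storage system's own snapshot ID (placeholder)
  volumeSnapshotRef:
    name: imported-snapshot
    namespace: default
---
# Namespaced VolumeSnapshot bound to that content, usable as a PVC dataSource.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: imported-snapshot
  namespace: default
spec:
  source:
    volumeSnapshotContentName: imported-snapshot-content
```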

@idvoretskyi idvoretskyi added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label May 18, 2017
@idvoretskyi
Member

@jingxu97 any progress on the feature description? @kubernetes/sig-storage-feature-requests

@idvoretskyi
Member

@mdelio @jingxu97 please, update the feature description with the new template - https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md

@saad-ali saad-ali modified the milestones: next-milestone, v1.7 Jun 15, 2017
@saad-ali
Member

@idvoretskyi I updated the original comment with the new template. This feature is not actually shipping any bits in the Kubernetes core for v1.7, so I moved it to next-milestone. That means it will not need documentation, etc. for 1.7 in the Kubernetes core. I will remove the feature from the 1.7 tracking board as well.

@saad-ali saad-ali modified the milestones: 1.8, next-milestone Jul 12, 2017
@idvoretskyi idvoretskyi added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 25, 2017
@idvoretskyi
Member

@saad-ali any updates for 1.8? Is this feature still on track for the release?

@rootfs

rootfs commented Sep 5, 2017

cc @tsmetana

@childsb
Contributor

childsb commented Sep 7, 2017

@idvoretskyi this is still on track: kubernetes-retired/external-storage#331

@jdumars
Member

jdumars commented Sep 15, 2017

@jingxu97 @rootfs any update on missing docs for this? PR is due today.

@xing-yang
Contributor

Thanks @eagleusb ! I'll submit a placeholder doc soon.

@eagleusb

eagleusb commented Nov 2, 2020

Hi @xing-yang 👋

Thanks for your update. In the meantime, the docs placeholder deadline is almost here.

Please make sure to create a placeholder PR against the dev-1.20 branch in the k/website before the deadline.

Also, please keep in mind the important upcoming dates:

@xing-yang
Contributor

Hi @eagleusb ,

Thanks for the reminder! Doc PR is submitted here: kubernetes/website#24849

@kikisdeliveryservice
Member

Hi @xing-yang

Looks like kubernetes/kubernetes#95282 is still open but being actively worked on. Just a reminder that Code Freeze is coming up in 2 days on Thursday, November 12th. All PRs must be merged by that date, otherwise an Exception is required.

Best,
Kirsten

@xing-yang
Contributor

Hi @kikisdeliveryservice ,

Thanks for the reminder! We are trying to get reviewers to finish reviewing and approving the PR by the 11/12 deadline.

Xing

@xing-yang
Contributor

xing-yang commented Nov 11, 2020

This PR that updates snapshot CRDs to v1 for cluster addon is merged: kubernetes/kubernetes#96383
We are moving closer.

@kikisdeliveryservice
Member

Great! just waiting on kubernetes/kubernetes#95282

@xing-yang
Contributor

@kikisdeliveryservice ,

kubernetes/kubernetes#95282 is approved. Just waiting for it to be merged. :)

@kikisdeliveryservice
Member

Yay! It's merged! Updating tracking sheet.

Congrats! 🎆

@xing-yang
Contributor

Thanks @kikisdeliveryservice!

@kikisdeliveryservice
Member

Hi @xing-yang

Can you update the kep.yaml to reflect a status of implemented?

Once that merges we can then close this issue.

Thanks!
Kirsten
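
For reference, the requested change is a small edit to the KEP metadata file; a minimal sketch (the file path and the latest-milestone field are assumptions, only the status value is the requested change):

```yaml
# keps/sig-storage/<kep-directory>/kep.yaml   (path is an assumption)
status: implemented          # previously "implementable"
latest-milestone: "v1.20"    # GA release, per the thread above
```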

@xing-yang
Contributor

Hi @kikisdeliveryservice,

Will submit a PR soon. Thanks!

Xing

@kikisdeliveryservice
Member

Thanks @xing-yang it's merged! Feel free to close this issue 😄

@annajung annajung removed this from the v1.20 milestone Jan 7, 2021
@annajung annajung removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Jan 7, 2021
@annajung
Contributor

annajung commented Jan 7, 2021

Hello, 1.21 Enhancement lead here.
I'm closing out this issue since the enhancement is GA and KEP has been updated to implemented.

/close

@k8s-ci-robot
Contributor

@annajung: Closing this issue.

In response to this:

Hello, 1.21 Enhancement lead here.
I'm closing out this issue since the enhancement is GA and KEP has been updated to implemented.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mickeyboxell

Hi @jingxu97 👋 1.24 RT Comms lead here. I saw a note that the VolumeSnapshot v1beta1 CRD will be removed in 1.24. Would this be appropriate to include in our 1.24 Removals and Deprecations blog post?

@jingxu97
Contributor Author

jingxu97 commented Mar 16, 2022

Yes, I think so @mickeyboxell

@xing-yang

@mickeyboxell

Thanks for confirming! @jingxu97 What information would you like communicated in the blog? I read that the functionality entered beta in 1.20 and was a little confused about the v1beta1 CRD now being removed. Did the project graduate to stable or was it replaced with an alternative API?

@jingxu97
Contributor Author

jingxu97 commented Mar 16, 2022

@xing-yang
Contributor

@mickeyboxell I added that entry in the spreadsheet. VolumeSnapshot went GA in 1.20. Following the K8s 1.21 release, we deprecated VolumeSnapshot v1beta1. Since VolumeSnapshot is an out-of-tree CRD, we have the deprecation message in the release note here:
https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v4.1.0

Now we are ready to remove the VolumeSnapshot v1beta1 CRD in our next external-snapshotter release, which will be v6.0, shortly after the K8s 1.24 release.

We want to add a message in the deprecation/removal blog to indicate that the VolumeSnapshot v1beta1 CRD will be removed in K8s 1.24.

Hope this helps.
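
For users of the snapshot CRDs, the practical impact is that manifests still using the v1beta1 API version need to move to v1 before upgrading to an external-snapshotter release that drops v1beta1; a minimal sketch with placeholder names (the spec fields shown are unchanged between the two versions):

```yaml
# Before: apiVersion: snapshot.storage.k8s.io/v1beta1   (deprecated; dropped in external-snapshotter v6.0)
apiVersion: snapshot.storage.k8s.io/v1                   # GA API, served since Kubernetes 1.20
kind: VolumeSnapshot
metadata:
  name: example-snapshot                       # placeholder
spec:
  volumeSnapshotClassName: example-snapclass   # placeholder
  source:
    persistentVolumeClaimName: example-pvc     # placeholder
```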

@xing-yang
Contributor

@mickeyboxell In addition to the deprecation/removal blog, I'd also like to have an entry in the K8s v1.24 release notes to indicate that the VolumeSnapshot v1beta1 CRD will be removed. Where can I add that release note?

@mickeyboxell

I'm not sure how their process works. You may want to reach out to the #release-notes channel for more information.
